Feb 9 09:46:44.065556 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 09:46:44.065599 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:46:44.065623 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:46:44.065639 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 09:46:44.065653 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:46:44.065667 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 09:46:44.065683 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 09:46:44.065697 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 09:46:44.065711 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 09:46:44.065725 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 09:46:44.065744 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 09:46:44.065759 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 09:46:44.065773 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 09:46:44.065787 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 09:46:44.065804 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 09:46:44.065846 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 09:46:44.065862 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 09:46:44.065877 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 09:46:44.065892 kernel: printk: bootconsole [uart0] enabled
Feb 9 09:46:44.065907 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:46:44.065922 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:46:44.065937 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 09:46:44.065952 kernel: Zone ranges:
Feb 9 09:46:44.065968 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 09:46:44.065983 kernel: DMA32 empty
Feb 9 09:46:44.065997 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 09:46:44.066017 kernel: Movable zone start for each node
Feb 9 09:46:44.066031 kernel: Early memory node ranges
Feb 9 09:46:44.066046 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 09:46:44.066060 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 09:46:44.066075 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 09:46:44.066090 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 09:46:44.066104 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 09:46:44.066118 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 09:46:44.066133 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:46:44.066147 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 09:46:44.066162 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:46:44.066176 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 09:46:44.066195 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:46:44.066210 kernel: psci: Trusted OS migration not required
Feb 9 09:46:44.066232 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:46:44.066248 kernel: ACPI: SRAT not present
Feb 9 09:46:44.066264 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:46:44.066284 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:46:44.066312 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:46:44.066335 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:46:44.066351 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:46:44.066367 kernel: CPU features: detected: Spectre-v2
Feb 9 09:46:44.066382 kernel: CPU features: detected: Spectre-v3a
Feb 9 09:46:44.066397 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:46:44.066412 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:46:44.066428 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:46:44.066443 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 09:46:44.066459 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 09:46:44.066501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 09:46:44.066518 kernel: Policy zone: Normal
Feb 9 09:46:44.066537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:44.066554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:46:44.066569 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:46:44.066585 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:46:44.066601 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:46:44.066617 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 09:46:44.066633 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 09:46:44.066649 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:46:44.066668 kernel: trace event string verifier disabled
Feb 9 09:46:44.066684 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:46:44.066699 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:46:44.066715 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:46:44.066731 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:46:44.066747 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:46:44.066762 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:46:44.066778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:46:44.066793 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:46:44.066809 kernel: GICv3: 96 SPIs implemented
Feb 9 09:46:44.066823 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:46:44.066839 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:46:44.066858 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:46:44.066873 kernel: GICv3: 16 PPIs implemented
Feb 9 09:46:44.066888 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 09:46:44.066903 kernel: ACPI: SRAT not present
Feb 9 09:46:44.066918 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 09:46:44.066933 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:46:44.066949 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:46:44.066964 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 09:46:44.066979 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 09:46:44.066994 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 09:46:44.067010 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 09:46:44.067030 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 09:46:44.067046 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 09:46:44.067061 kernel: Console: colour dummy device 80x25
Feb 9 09:46:44.067078 kernel: printk: console [tty1] enabled
Feb 9 09:46:44.067094 kernel: ACPI: Core revision 20210730
Feb 9 09:46:44.067110 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 09:46:44.067126 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:46:44.067142 kernel: LSM: Security Framework initializing
Feb 9 09:46:44.067158 kernel: SELinux: Initializing.
Feb 9 09:46:44.067174 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:44.067195 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:46:44.067210 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:46:44.067226 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 09:46:44.067242 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 09:46:44.067257 kernel: Remapping and enabling EFI services.
Feb 9 09:46:44.067273 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:46:44.067289 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:46:44.067305 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 09:46:44.067321 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 09:46:44.067341 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 09:46:44.067357 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:46:44.067372 kernel: SMP: Total of 2 processors activated.
Feb 9 09:46:44.067388 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:46:44.067403 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 09:46:44.067419 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:46:44.067434 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:46:44.067450 kernel: alternatives: patching kernel code
Feb 9 09:46:44.070528 kernel: devtmpfs: initialized
Feb 9 09:46:44.070580 kernel: KASLR disabled due to lack of seed
Feb 9 09:46:44.070598 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:46:44.070616 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:46:44.070644 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:46:44.070665 kernel: SMBIOS 3.0.0 present.
Feb 9 09:46:44.070681 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 09:46:44.070698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:46:44.070714 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:46:44.070731 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:46:44.070747 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:46:44.070764 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:46:44.070781 kernel: audit: type=2000 audit(0.254:1): state=initialized audit_enabled=0 res=1
Feb 9 09:46:44.070807 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:46:44.070824 kernel: cpuidle: using governor menu
Feb 9 09:46:44.070841 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:46:44.070857 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:46:44.070873 kernel: ACPI: bus type PCI registered
Feb 9 09:46:44.070894 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:46:44.070910 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:46:44.070927 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:46:44.070944 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:46:44.070960 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:46:44.070976 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:46:44.070992 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:46:44.071009 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:46:44.071025 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:46:44.071045 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:46:44.071062 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:46:44.071079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:46:44.071095 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:46:44.071111 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:46:44.071127 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:46:44.071143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:46:44.071161 kernel: ACPI: Interpreter enabled
Feb 9 09:46:44.071181 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:46:44.071204 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:46:44.071224 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 09:46:44.071566 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:46:44.071798 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:46:44.072019 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:46:44.072242 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 09:46:44.072436 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 09:46:44.072484 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 09:46:44.072508 kernel: acpiphp: Slot [1] registered
Feb 9 09:46:44.072525 kernel: acpiphp: Slot [2] registered
Feb 9 09:46:44.072541 kernel: acpiphp: Slot [3] registered
Feb 9 09:46:44.072557 kernel: acpiphp: Slot [4] registered
Feb 9 09:46:44.072574 kernel: acpiphp: Slot [5] registered
Feb 9 09:46:44.072590 kernel: acpiphp: Slot [6] registered
Feb 9 09:46:44.072607 kernel: acpiphp: Slot [7] registered
Feb 9 09:46:44.072623 kernel: acpiphp: Slot [8] registered
Feb 9 09:46:44.072644 kernel: acpiphp: Slot [9] registered
Feb 9 09:46:44.072661 kernel: acpiphp: Slot [10] registered
Feb 9 09:46:44.072677 kernel: acpiphp: Slot [11] registered
Feb 9 09:46:44.072694 kernel: acpiphp: Slot [12] registered
Feb 9 09:46:44.072710 kernel: acpiphp: Slot [13] registered
Feb 9 09:46:44.072726 kernel: acpiphp: Slot [14] registered
Feb 9 09:46:44.072742 kernel: acpiphp: Slot [15] registered
Feb 9 09:46:44.072758 kernel: acpiphp: Slot [16] registered
Feb 9 09:46:44.072774 kernel: acpiphp: Slot [17] registered
Feb 9 09:46:44.072790 kernel: acpiphp: Slot [18] registered
Feb 9 09:46:44.072810 kernel: acpiphp: Slot [19] registered
Feb 9 09:46:44.072826 kernel: acpiphp: Slot [20] registered
Feb 9 09:46:44.072842 kernel: acpiphp: Slot [21] registered
Feb 9 09:46:44.072859 kernel: acpiphp: Slot [22] registered
Feb 9 09:46:44.072875 kernel: acpiphp: Slot [23] registered
Feb 9 09:46:44.072891 kernel: acpiphp: Slot [24] registered
Feb 9 09:46:44.072907 kernel: acpiphp: Slot [25] registered
Feb 9 09:46:44.072923 kernel: acpiphp: Slot [26] registered
Feb 9 09:46:44.072939 kernel: acpiphp: Slot [27] registered
Feb 9 09:46:44.072959 kernel: acpiphp: Slot [28] registered
Feb 9 09:46:44.072976 kernel: acpiphp: Slot [29] registered
Feb 9 09:46:44.072992 kernel: acpiphp: Slot [30] registered
Feb 9 09:46:44.073008 kernel: acpiphp: Slot [31] registered
Feb 9 09:46:44.073024 kernel: PCI host bridge to bus 0000:00
Feb 9 09:46:44.073250 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 09:46:44.073440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:46:44.076635 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:46:44.076887 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 09:46:44.077123 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 09:46:44.077341 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 09:46:44.081233 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 09:46:44.090144 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 09:46:44.090374 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 09:46:44.090610 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:46:44.090836 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 09:46:44.091037 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 09:46:44.091236 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 09:46:44.091435 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 09:46:44.091658 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:46:44.091858 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 09:46:44.092065 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 09:46:44.092262 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 09:46:44.092461 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 09:46:44.092692 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 09:46:44.092873 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 09:46:44.093051 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:46:44.093228 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:46:44.093256 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:46:44.093273 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:46:44.093290 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:46:44.093306 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:46:44.093323 kernel: iommu: Default domain type: Translated
Feb 9 09:46:44.093340 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:46:44.093356 kernel: vgaarb: loaded
Feb 9 09:46:44.093373 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:46:44.093389 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:46:44.093410 kernel: PTP clock support registered
Feb 9 09:46:44.093427 kernel: Registered efivars operations
Feb 9 09:46:44.093443 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:46:44.093459 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:46:44.093504 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:46:44.093522 kernel: pnp: PnP ACPI init
Feb 9 09:46:44.093732 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 09:46:44.093757 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:46:44.093775 kernel: NET: Registered PF_INET protocol family
Feb 9 09:46:44.093797 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:46:44.093827 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:46:44.093849 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:46:44.093866 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:46:44.093882 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:46:44.093899 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:46:44.093915 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:44.093932 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:46:44.093948 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:46:44.093970 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:46:44.093986 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 09:46:44.094002 kernel: kvm [1]: HYP mode not available
Feb 9 09:46:44.094018 kernel: Initialise system trusted keyrings
Feb 9 09:46:44.094035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:46:44.094051 kernel: Key type asymmetric registered
Feb 9 09:46:44.094068 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:46:44.094085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:46:44.094101 kernel: io scheduler mq-deadline registered
Feb 9 09:46:44.094121 kernel: io scheduler kyber registered
Feb 9 09:46:44.094138 kernel: io scheduler bfq registered
Feb 9 09:46:44.094350 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 09:46:44.094375 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:46:44.094392 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:46:44.094408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:46:44.094425 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 09:46:44.100427 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 09:46:44.100498 kernel: printk: console [ttyS0] disabled
Feb 9 09:46:44.100518 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 09:46:44.100535 kernel: printk: console [ttyS0] enabled
Feb 9 09:46:44.100552 kernel: printk: bootconsole [uart0] disabled
Feb 9 09:46:44.100568 kernel: thunder_xcv, ver 1.0
Feb 9 09:46:44.100585 kernel: thunder_bgx, ver 1.0
Feb 9 09:46:44.100601 kernel: nicpf, ver 1.0
Feb 9 09:46:44.100617 kernel: nicvf, ver 1.0
Feb 9 09:46:44.100835 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:46:44.101031 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:46:43 UTC (1707472003)
Feb 9 09:46:44.101054 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:46:44.101071 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:46:44.101087 kernel: Segment Routing with IPv6
Feb 9 09:46:44.101104 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:46:44.101120 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:46:44.101136 kernel: Key type dns_resolver registered
Feb 9 09:46:44.101152 kernel: registered taskstats version 1
Feb 9 09:46:44.101173 kernel: Loading compiled-in X.509 certificates
Feb 9 09:46:44.101190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:46:44.101207 kernel: Key type .fscrypt registered
Feb 9 09:46:44.101222 kernel: Key type fscrypt-provisioning registered
Feb 9 09:46:44.101239 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:46:44.101256 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:46:44.101273 kernel: ima: No architecture policies found
Feb 9 09:46:44.101289 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:46:44.101305 kernel: Run /init as init process
Feb 9 09:46:44.101325 kernel: with arguments:
Feb 9 09:46:44.101341 kernel: /init
Feb 9 09:46:44.101357 kernel: with environment:
Feb 9 09:46:44.101373 kernel: HOME=/
Feb 9 09:46:44.101389 kernel: TERM=linux
Feb 9 09:46:44.101405 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:46:44.101426 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:46:44.101446 systemd[1]: Detected virtualization amazon.
Feb 9 09:46:44.101488 systemd[1]: Detected architecture arm64.
Feb 9 09:46:44.101509 systemd[1]: Running in initrd.
Feb 9 09:46:44.101527 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:46:44.101544 systemd[1]: Hostname set to .
Feb 9 09:46:44.101563 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:46:44.101580 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:46:44.101597 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:46:44.101614 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:46:44.101637 systemd[1]: Reached target paths.target.
Feb 9 09:46:44.101655 systemd[1]: Reached target slices.target.
Feb 9 09:46:44.101673 systemd[1]: Reached target swap.target.
Feb 9 09:46:44.101690 systemd[1]: Reached target timers.target.
Feb 9 09:46:44.101708 systemd[1]: Listening on iscsid.socket.
Feb 9 09:46:44.101725 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:46:44.101743 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:46:44.101761 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:46:44.101783 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:46:44.101801 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:46:44.101839 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:46:44.101860 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:46:44.101878 systemd[1]: Reached target sockets.target.
Feb 9 09:46:44.101896 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:46:44.101914 systemd[1]: Finished network-cleanup.service.
Feb 9 09:46:44.101931 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:46:44.101949 systemd[1]: Starting systemd-journald.service...
Feb 9 09:46:44.101974 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:46:44.101991 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:46:44.102009 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:46:44.102027 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:46:44.102046 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:46:44.102064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:46:44.109902 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:46:44.109938 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:46:44.109958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:46:44.109987 kernel: audit: type=1130 audit(1707472004.050:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.110008 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:46:44.110027 kernel: audit: type=1130 audit(1707472004.081:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.110045 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:46:44.110063 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:46:44.110080 kernel: Bridge firewalling registered
Feb 9 09:46:44.110101 systemd-journald[308]: Journal started
Feb 9 09:46:44.110210 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2d4a22969faa833375d6cdf348ced2) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:46:44.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.001651 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 09:46:44.070709 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 09:46:44.117972 systemd[1]: Started systemd-journald.service.
Feb 9 09:46:44.070732 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:46:44.070785 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:46:44.110924 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 09:46:44.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.146493 kernel: audit: type=1130 audit(1707472004.133:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.146560 kernel: SCSI subsystem initialized
Feb 9 09:46:44.147053 dracut-cmdline[326]: dracut-dracut-053
Feb 9 09:46:44.166651 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:46:44.166728 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:46:44.166753 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:46:44.185981 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:46:44.186424 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 09:46:44.190224 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:46:44.204994 kernel: audit: type=1130 audit(1707472004.191:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.193874 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:46:44.221031 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:46:44.231640 kernel: audit: type=1130 audit(1707472004.221:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.322494 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:46:44.334499 kernel: iscsi: registered transport (tcp)
Feb 9 09:46:44.358572 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:46:44.358643 kernel: QLogic iSCSI HBA Driver
Feb 9 09:46:44.546033 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 09:46:44.548735 kernel: random: crng init done
Feb 9 09:46:44.548954 systemd[1]: Started systemd-resolved.service.
Feb 9 09:46:44.561076 kernel: audit: type=1130 audit(1707472004.549:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.550783 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:46:44.574864 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:46:44.603599 kernel: audit: type=1130 audit(1707472004.573:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:44.597175 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:46:44.662508 kernel: raid6: neonx8 gen() 6341 MB/s
Feb 9 09:46:44.680517 kernel: raid6: neonx8 xor() 4697 MB/s
Feb 9 09:46:44.698517 kernel: raid6: neonx4 gen() 6377 MB/s
Feb 9 09:46:44.716519 kernel: raid6: neonx4 xor() 4866 MB/s
Feb 9 09:46:44.734531 kernel: raid6: neonx2 gen() 5605 MB/s
Feb 9 09:46:44.752530 kernel: raid6: neonx2 xor() 4446 MB/s
Feb 9 09:46:44.770534 kernel: raid6: neonx1 gen() 4372 MB/s
Feb 9 09:46:44.788529 kernel: raid6: neonx1 xor() 3611 MB/s
Feb 9 09:46:44.806528 kernel: raid6: int64x8 gen() 3375 MB/s
Feb 9 09:46:44.824516 kernel: raid6: int64x8 xor() 2072 MB/s
Feb 9 09:46:44.842514 kernel: raid6: int64x4 gen() 3766 MB/s
Feb 9 09:46:44.860510 kernel: raid6: int64x4 xor() 2187 MB/s
Feb 9 09:46:44.878518 kernel: raid6: int64x2 gen() 3546 MB/s
Feb 9 09:46:44.896517 kernel: raid6: int64x2 xor() 1931 MB/s
Feb 9 09:46:44.914525 kernel: raid6: int64x1 gen() 2740 MB/s
Feb 9 09:46:44.934137 kernel: raid6: int64x1 xor() 1439 MB/s
Feb 9 09:46:44.934203 kernel: raid6: using algorithm neonx4 gen() 6377 MB/s
Feb 9 09:46:44.934228 kernel: raid6: .... xor() 4866 MB/s, rmw enabled
Feb 9 09:46:44.936004 kernel: raid6: using neon recovery algorithm
Feb 9 09:46:44.955524 kernel: xor: measuring software checksum speed
Feb 9 09:46:44.958502 kernel: 8regs : 9360 MB/sec
Feb 9 09:46:44.961511 kernel: 32regs : 11122 MB/sec
Feb 9 09:46:44.961574 kernel: arm64_neon : 9648 MB/sec
Feb 9 09:46:44.965074 kernel: xor: using function: 32regs (11122 MB/sec)
Feb 9 09:46:45.058514 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:46:45.077725 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:46:45.094147 kernel: audit: type=1130 audit(1707472005.078:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.094185 kernel: audit: type=1334 audit(1707472005.078:10): prog-id=7 op=LOAD
Feb 9 09:46:45.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.078000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:46:45.078000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:46:45.083170 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:46:45.122198 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 09:46:45.134226 systemd[1]: Started systemd-udevd.service.
Feb 9 09:46:45.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.143642 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:46:45.177927 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation
Feb 9 09:46:45.244635 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:46:45.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.249154 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:46:45.359006 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:46:45.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:45.498908 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:46:45.498990 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 09:46:45.514660 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 09:46:45.515005 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 09:46:45.515032 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 09:46:45.515241 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 09:46:45.526844 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f4:ed:7b:04:0f
Feb 9 09:46:45.527163 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 09:46:45.534153 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:46:45.534225 kernel: GPT:9289727 != 16777215
Feb 9 09:46:45.536538 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:46:45.537923 kernel: GPT:9289727 != 16777215
Feb 9 09:46:45.539913 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:46:45.541547 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:46:45.546665 (udev-worker)[560]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:46:45.622509 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (556)
Feb 9 09:46:45.637977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:46:45.685079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:46:45.729381 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:46:45.734651 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:46:45.738124 systemd[1]: Starting disk-uuid.service...
Feb 9 09:46:45.756369 disk-uuid[670]: Primary Header is updated.
Feb 9 09:46:45.756369 disk-uuid[670]: Secondary Entries is updated.
Feb 9 09:46:45.756369 disk-uuid[670]: Secondary Header is updated.
Feb 9 09:46:45.771905 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:46:45.781525 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:46:46.779271 disk-uuid[671]: The operation has completed successfully.
Feb 9 09:46:46.781707 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:46:46.945224 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:46:46.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:46.945458 systemd[1]: Finished disk-uuid.service.
Feb 9 09:46:46.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:46.972787 systemd[1]: Starting verity-setup.service...
Feb 9 09:46:47.012673 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:46:47.125995 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:46:47.130155 systemd[1]: Finished verity-setup.service.
Feb 9 09:46:47.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.135186 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:46:47.222511 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:46:47.223134 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:46:47.226260 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:46:47.228689 systemd[1]: Starting ignition-setup.service...
Feb 9 09:46:47.237535 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:46:47.262883 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:47.262954 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:47.265836 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:47.275509 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:47.296346 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:46:47.336320 systemd[1]: Finished ignition-setup.service.
Feb 9 09:46:47.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.340357 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:46:47.406195 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:46:47.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.420000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:46:47.423337 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:46:47.472758 systemd-networkd[1182]: lo: Link UP
Feb 9 09:46:47.473233 systemd-networkd[1182]: lo: Gained carrier
Feb 9 09:46:47.476278 systemd-networkd[1182]: Enumeration completed
Feb 9 09:46:47.476453 systemd[1]: Started systemd-networkd.service.
Feb 9 09:46:47.478416 systemd-networkd[1182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:46:47.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.489160 systemd[1]: Reached target network.target.
Feb 9 09:46:47.490936 systemd-networkd[1182]: eth0: Link UP
Feb 9 09:46:47.491493 systemd-networkd[1182]: eth0: Gained carrier
Feb 9 09:46:47.505127 systemd[1]: Starting iscsiuio.service...
Feb 9 09:46:47.518363 systemd[1]: Started iscsiuio.service.
Feb 9 09:46:47.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.523623 systemd[1]: Starting iscsid.service...
Feb 9 09:46:47.525452 systemd-networkd[1182]: eth0: DHCPv4 address 172.31.16.94/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:46:47.535818 iscsid[1187]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:46:47.535818 iscsid[1187]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:46:47.535818 iscsid[1187]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:46:47.535818 iscsid[1187]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:46:47.535818 iscsid[1187]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:46:47.556519 iscsid[1187]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:46:47.562172 systemd[1]: Started iscsid.service.
Feb 9 09:46:47.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.578305 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:46:47.604752 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:46:47.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:47.608139 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:46:47.611562 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:46:47.615088 systemd[1]: Reached target remote-fs.target.
Feb 9 09:46:47.620447 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:46:47.643182 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:46:47.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.147017 ignition[1132]: Ignition 2.14.0
Feb 9 09:46:48.147049 ignition[1132]: Stage: fetch-offline
Feb 9 09:46:48.147387 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:48.147448 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:48.168491 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:48.170019 ignition[1132]: Ignition finished successfully
Feb 9 09:46:48.173656 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:46:48.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.185500 kernel: kauditd_printk_skb: 15 callbacks suppressed
Feb 9 09:46:48.185585 kernel: audit: type=1130 audit(1707472008.174:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.179504 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:46:48.197536 ignition[1206]: Ignition 2.14.0
Feb 9 09:46:48.199215 ignition[1206]: Stage: fetch
Feb 9 09:46:48.200705 ignition[1206]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:48.202950 ignition[1206]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:48.213861 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:48.216155 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:48.230684 ignition[1206]: INFO : PUT result: OK
Feb 9 09:46:48.235076 ignition[1206]: DEBUG : parsed url from cmdline: ""
Feb 9 09:46:48.236844 ignition[1206]: INFO : no config URL provided
Feb 9 09:46:48.236844 ignition[1206]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:46:48.236844 ignition[1206]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 09:46:48.236844 ignition[1206]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:48.245483 ignition[1206]: INFO : PUT result: OK
Feb 9 09:46:48.247280 ignition[1206]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 09:46:48.250332 ignition[1206]: INFO : GET result: OK
Feb 9 09:46:48.251840 ignition[1206]: DEBUG : parsing config with SHA512: e528961325f63e8145acde95d496dfe285998a2d199dbb513d9764a7293b760e14f07bff15dafeec6465737c1ac56057e9e0323fcebb2f7b5d3d04d9a7c8ae93
Feb 9 09:46:48.283570 unknown[1206]: fetched base config from "system"
Feb 9 09:46:48.283599 unknown[1206]: fetched base config from "system"
Feb 9 09:46:48.283614 unknown[1206]: fetched user config from "aws"
Feb 9 09:46:48.289079 ignition[1206]: fetch: fetch complete
Feb 9 09:46:48.289106 ignition[1206]: fetch: fetch passed
Feb 9 09:46:48.289198 ignition[1206]: Ignition finished successfully
Feb 9 09:46:48.295653 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:46:48.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.300191 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:46:48.311622 kernel: audit: type=1130 audit(1707472008.297:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.323284 ignition[1212]: Ignition 2.14.0
Feb 9 09:46:48.323313 ignition[1212]: Stage: kargs
Feb 9 09:46:48.323643 ignition[1212]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:48.323701 ignition[1212]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:48.338074 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:48.340712 ignition[1212]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:48.343984 ignition[1212]: INFO : PUT result: OK
Feb 9 09:46:48.349368 ignition[1212]: kargs: kargs passed
Feb 9 09:46:48.350983 ignition[1212]: Ignition finished successfully
Feb 9 09:46:48.354257 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:46:48.365627 kernel: audit: type=1130 audit(1707472008.354:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.357489 systemd[1]: Starting ignition-disks.service...
Feb 9 09:46:48.372880 ignition[1218]: Ignition 2.14.0
Feb 9 09:46:48.372895 ignition[1218]: Stage: disks
Feb 9 09:46:48.373189 ignition[1218]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:48.373246 ignition[1218]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:48.391373 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:48.393922 ignition[1218]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:48.397264 ignition[1218]: INFO : PUT result: OK
Feb 9 09:46:48.402631 ignition[1218]: disks: disks passed
Feb 9 09:46:48.404160 ignition[1218]: Ignition finished successfully
Feb 9 09:46:48.407371 systemd[1]: Finished ignition-disks.service.
Feb 9 09:46:48.419297 kernel: audit: type=1130 audit(1707472008.406:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.410489 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:46:48.419334 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:46:48.421066 systemd[1]: Reached target local-fs.target.
Feb 9 09:46:48.427324 systemd[1]: Reached target sysinit.target.
Feb 9 09:46:48.430243 systemd[1]: Reached target basic.target.
Feb 9 09:46:48.433401 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:46:48.481986 systemd-fsck[1226]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:46:48.489451 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:46:48.502738 kernel: audit: type=1130 audit(1707472008.490:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.493005 systemd[1]: Mounting sysroot.mount...
Feb 9 09:46:48.520729 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:46:48.521155 systemd[1]: Mounted sysroot.mount.
Feb 9 09:46:48.523015 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:46:48.529249 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:46:48.533026 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:46:48.533414 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:46:48.533518 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:46:48.545508 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:46:48.560184 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:46:48.564796 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:46:48.586262 initrd-setup-root[1248]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:46:48.598147 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1243)
Feb 9 09:46:48.598184 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:48.598208 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:48.598231 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:48.606500 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:48.610276 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:46:48.616439 initrd-setup-root[1274]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:46:48.625272 initrd-setup-root[1282]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:46:48.634197 initrd-setup-root[1290]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:46:48.836291 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:46:48.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.841043 systemd[1]: Starting ignition-mount.service...
Feb 9 09:46:48.850960 kernel: audit: type=1130 audit(1707472008.838:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.851628 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:46:48.864007 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:46:48.864222 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:46:48.899235 ignition[1309]: INFO : Ignition 2.14.0
Feb 9 09:46:48.899235 ignition[1309]: INFO : Stage: mount
Feb 9 09:46:48.899235 ignition[1309]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:48.899235 ignition[1309]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:48.908434 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:46:48.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.924514 kernel: audit: type=1130 audit(1707472008.915:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.928410 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:48.931144 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:48.934382 ignition[1309]: INFO : PUT result: OK
Feb 9 09:46:48.949511 ignition[1309]: INFO : mount: mount passed
Feb 9 09:46:48.951134 ignition[1309]: INFO : Ignition finished successfully
Feb 9 09:46:48.954532 systemd[1]: Finished ignition-mount.service.
Feb 9 09:46:48.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.959058 systemd[1]: Starting ignition-files.service...
Feb 9 09:46:48.968505 kernel: audit: type=1130 audit(1707472008.956:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:48.975744 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:46:48.993539 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1318)
Feb 9 09:46:49.000201 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:46:49.000273 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:46:49.000298 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:46:49.009512 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:46:49.014057 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:46:49.032884 ignition[1337]: INFO : Ignition 2.14.0
Feb 9 09:46:49.032884 ignition[1337]: INFO : Stage: files
Feb 9 09:46:49.036401 ignition[1337]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:46:49.036401 ignition[1337]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:46:49.036429 systemd-networkd[1182]: eth0: Gained IPv6LL
Feb 9 09:46:49.052460 ignition[1337]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:46:49.055356 ignition[1337]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:46:49.058674 ignition[1337]: INFO : PUT result: OK
Feb 9 09:46:49.064645 ignition[1337]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:46:49.067730 ignition[1337]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:46:49.067730 ignition[1337]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:46:49.111150 ignition[1337]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:46:49.114252 ignition[1337]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:46:49.118598 unknown[1337]: wrote ssh authorized keys file for user: core
Feb 9 09:46:49.120914 ignition[1337]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:46:49.124501 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:46:49.128440 ignition[1337]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 09:46:49.591550 ignition[1337]: INFO : GET result: OK
Feb 9 09:46:50.033601 ignition[1337]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 09:46:50.038499 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 09:46:50.038499 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:46:50.038499 ignition[1337]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:46:50.441846 ignition[1337]: INFO : GET result: OK
Feb 9 09:46:50.686989 ignition[1337]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 09:46:50.692000 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 09:46:50.692000 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:46:50.692000 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:50.710195 ignition[1337]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3698888856"
Feb 9 09:46:50.716895 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1340)
Feb 9 09:46:50.716935 ignition[1337]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3698888856": device or resource busy
Feb 9 09:46:50.716935 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3698888856", trying btrfs: device or resource busy
Feb 9 09:46:50.716935 ignition[1337]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3698888856"
Feb 9 09:46:50.726529 ignition[1337]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3698888856"
Feb 9 09:46:50.726529 ignition[1337]: INFO : op(3): [started] unmounting "/mnt/oem3698888856"
Feb 9 09:46:50.731717 ignition[1337]: INFO : op(3): [finished] unmounting "/mnt/oem3698888856"
Feb 9 09:46:50.731717 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:46:50.737455 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:46:50.737455 ignition[1337]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:46:50.910764 ignition[1337]: INFO : GET result: OK
Feb 9 09:46:51.512217 ignition[1337]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 09:46:51.517572 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:46:51.517572 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:46:51.517572 ignition[1337]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:46:51.576351 ignition[1337]: INFO : GET result: OK
Feb 9 09:46:53.090036 ignition[1337]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 09:46:53.095061 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:46:53.095061 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:46:53.095061 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:46:53.095061 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:46:53.108980 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:46:53.117997 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:46:53.121763 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:46:53.125301 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:46:53.129372 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:53.141220 ignition[1337]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3042388788"
Feb 9 09:46:53.141220 ignition[1337]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3042388788": device or resource busy
Feb 9 09:46:53.141220 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3042388788", trying btrfs: device or resource busy
Feb 9 09:46:53.141220 ignition[1337]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3042388788"
Feb 9 09:46:53.154023 ignition[1337]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3042388788"
Feb 9 09:46:53.154023 ignition[1337]: INFO : op(6): [started] unmounting "/mnt/oem3042388788"
Feb 9 09:46:53.160780 ignition[1337]: INFO : op(6): [finished] unmounting "/mnt/oem3042388788"
Feb 9 09:46:53.163092 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:46:53.163092 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:46:53.170520 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:46:53.183556 ignition[1337]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836614192"
Feb 9 09:46:53.183556 ignition[1337]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836614192": device or resource busy
Feb 9 09:46:53.183556 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3836614192", trying btrfs: device or resource busy
Feb 9 09:46:53.183556 ignition[1337]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836614192"
Feb 9 09:46:53.183556 ignition[1337]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3836614192"
Feb 9 09:46:53.183556 ignition[1337]: INFO : op(9): [started] unmounting "/mnt/oem3836614192"
Feb 9 09:46:53.183556 ignition[1337]: INFO : op(9): [finished] unmounting "/mnt/oem3836614192"
Feb 9 09:46:53.183556 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:46:53.206587 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:46:53.206587 ignition[1337]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:46:53.227223 ignition[1337]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1533512206" Feb 9 09:46:53.230221 ignition[1337]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1533512206": device or resource busy Feb 9 09:46:53.230221 ignition[1337]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1533512206", trying btrfs: device or resource busy Feb 9 09:46:53.230221 ignition[1337]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1533512206" Feb 9 09:46:53.230221 ignition[1337]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1533512206" Feb 9 09:46:53.245178 ignition[1337]: INFO : op(c): [started] unmounting "/mnt/oem1533512206" Feb 9 09:46:53.245178 ignition[1337]: INFO : op(c): [finished] unmounting "/mnt/oem1533512206" Feb 9 09:46:53.250038 ignition[1337]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(e): [started] processing unit "nvidia.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(e): [finished] processing unit "nvidia.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service" Feb 9 09:46:53.253846 ignition[1337]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at 
"/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:46:53.271272 ignition[1337]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 09:46:53.271272 ignition[1337]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service" Feb 9 09:46:53.271272 ignition[1337]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:46:53.280806 ignition[1337]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(17): [finished] setting preset to enabled for 
"coreos-metadata-sshkeys@.service " Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:53.290139 ignition[1337]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:46:53.370040 kernel: audit: type=1130 audit(1707472013.308:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.309414 systemd[1]: Finished ignition-files.service. Feb 9 09:46:53.373252 ignition[1337]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:53.373252 ignition[1337]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:46:53.373252 ignition[1337]: INFO : files: files passed Feb 9 09:46:53.373252 ignition[1337]: INFO : Ignition finished successfully Feb 9 09:46:53.330976 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 09:46:53.381378 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:46:53.385526 systemd[1]: Starting ignition-quench.service... Feb 9 09:46:53.403309 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 09:46:53.403794 systemd[1]: Finished ignition-quench.service. Feb 9 09:46:53.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.424844 kernel: audit: type=1130 audit(1707472013.404:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.424919 kernel: audit: type=1131 audit(1707472013.404:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.427819 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:46:53.432304 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:46:53.448688 kernel: audit: type=1130 audit(1707472013.430:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:46:53.434806 systemd[1]: Reached target ignition-complete.target. Feb 9 09:46:53.449190 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:46:53.486264 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:46:53.488278 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:46:53.498875 kernel: audit: type=1130 audit(1707472013.488:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.499058 systemd[1]: Reached target initrd-fs.target. Feb 9 09:46:53.507552 kernel: audit: type=1131 audit(1707472013.497:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.509546 systemd[1]: Reached target initrd.target. Feb 9 09:46:53.512751 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:46:53.517002 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:46:53.542372 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:46:53.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.549655 systemd[1]: Starting initrd-cleanup.service... 
Feb 9 09:46:53.558642 kernel: audit: type=1130 audit(1707472013.546:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.572660 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:46:53.576066 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:46:53.579814 systemd[1]: Stopped target timers.target. Feb 9 09:46:53.582963 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:46:53.585164 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:46:53.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.588757 systemd[1]: Stopped target initrd.target. Feb 9 09:46:53.598877 kernel: audit: type=1131 audit(1707472013.587:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.599111 systemd[1]: Stopped target basic.target. Feb 9 09:46:53.637254 kernel: audit: type=1131 audit(1707472013.603:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.637311 kernel: audit: type=1131 audit(1707472013.603:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:53.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.599601 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:46:53.600162 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:46:53.600558 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:46:53.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.659427 iscsid[1187]: iscsid shutting down. Feb 9 09:46:53.600810 systemd[1]: Stopped target remote-fs.target. Feb 9 09:46:53.601108 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:46:53.601441 systemd[1]: Stopped target sysinit.target. Feb 9 09:46:53.602102 systemd[1]: Stopped target local-fs.target. Feb 9 09:46:53.602396 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:46:53.603148 systemd[1]: Stopped target swap.target. Feb 9 09:46:53.603998 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:46:53.604317 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:46:53.612298 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:46:53.613146 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:46:53.613452 systemd[1]: Stopped dracut-initqueue.service. 
Feb 9 09:46:53.692946 ignition[1375]: INFO : Ignition 2.14.0 Feb 9 09:46:53.692946 ignition[1375]: INFO : Stage: umount Feb 9 09:46:53.692946 ignition[1375]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:46:53.692946 ignition[1375]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 09:46:53.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.614261 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:46:53.614893 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:46:53.615840 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:46:53.616158 systemd[1]: Stopped ignition-files.service. Feb 9 09:46:53.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.636124 systemd[1]: Stopping ignition-mount.service... Feb 9 09:46:53.643225 systemd[1]: Stopping iscsid.service... Feb 9 09:46:53.647124 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:46:53.647593 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:46:53.665832 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:46:53.684824 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:46:53.685293 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:46:53.695408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:46:53.697866 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:46:53.724347 systemd[1]: iscsid.service: Deactivated successfully. 
Feb 9 09:46:53.724581 systemd[1]: Stopped iscsid.service. Feb 9 09:46:53.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.747341 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:46:53.749908 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:46:53.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.761488 systemd[1]: Stopping iscsiuio.service... Feb 9 09:46:53.767628 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:46:53.769906 systemd[1]: Stopped iscsiuio.service. Feb 9 09:46:53.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.778033 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:46:53.782269 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 09:46:53.784875 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 09:46:53.789687 ignition[1375]: INFO : PUT result: OK Feb 9 09:46:53.796517 ignition[1375]: INFO : umount: umount passed Feb 9 09:46:53.798961 ignition[1375]: INFO : Ignition finished successfully Feb 9 09:46:53.802052 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:46:53.802403 systemd[1]: Stopped ignition-mount.service. 
Feb 9 09:46:53.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.805952 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:46:53.806062 systemd[1]: Stopped ignition-disks.service. Feb 9 09:46:53.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.808380 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:46:53.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.808984 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:46:53.811072 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:46:53.811258 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:46:53.814942 systemd[1]: Stopped target network.target. Feb 9 09:46:53.817744 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:46:53.817949 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:46:53.820441 systemd[1]: Stopped target paths.target. Feb 9 09:46:53.823283 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:46:53.828747 systemd[1]: Stopped systemd-ask-password-console.path. 
Feb 9 09:46:53.836278 systemd[1]: Stopped target slices.target. Feb 9 09:46:53.846652 systemd[1]: Stopped target sockets.target. Feb 9 09:46:53.852281 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:46:53.852403 systemd[1]: Closed iscsid.socket. Feb 9 09:46:53.856573 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:46:53.856679 systemd[1]: Closed iscsiuio.socket. Feb 9 09:46:53.861161 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:46:53.861304 systemd[1]: Stopped ignition-setup.service. Feb 9 09:46:53.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.866751 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:46:53.870079 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:46:53.873658 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:46:53.873888 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:46:53.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.874547 systemd-networkd[1182]: eth0: DHCPv6 lease lost Feb 9 09:46:53.877582 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:46:53.877687 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:46:53.886767 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:46:53.888740 systemd[1]: Stopped systemd-resolved.service. 
Feb 9 09:46:53.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.892674 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:46:53.894754 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:46:53.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.896000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:46:53.898350 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:46:53.898458 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:46:53.900000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:46:53.905054 systemd[1]: Stopping network-cleanup.service... Feb 9 09:46:53.909635 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:46:53.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.909769 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:46:53.915488 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:46:53.915611 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:46:53.917715 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 9 09:46:53.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.917836 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:46:53.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.921690 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:46:53.926572 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:46:53.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.934448 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:46:53.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:46:53.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:46:53.935841 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:46:53.939200 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:46:53.951954 systemd[1]: Stopped network-cleanup.service. Feb 9 09:46:53.955033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:46:53.955129 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:46:53.957190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:46:53.957390 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:46:53.960401 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:46:53.960528 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:46:53.962272 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:46:53.962369 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:46:53.964083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:46:53.964180 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:46:53.967253 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:46:53.974639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:46:53.974771 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:46:53.984399 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:46:53.984638 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:46:53.990257 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:46:54.025024 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:46:54.040974 systemd[1]: Switching root. Feb 9 09:46:54.067910 systemd-journald[308]: Journal stopped Feb 9 09:47:00.199046 systemd-journald[308]: Received SIGTERM from PID 1 (systemd). 
Feb 9 09:47:00.199171 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:47:00.199213 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:47:00.199250 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:47:00.199282 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:47:00.199313 kernel: SELinux: policy capability open_perms=1 Feb 9 09:47:00.199351 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:47:00.199388 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:47:00.199419 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:47:00.199451 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:47:00.199498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:47:00.199531 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:47:00.199568 systemd[1]: Successfully loaded SELinux policy in 109.151ms. Feb 9 09:47:00.199617 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.547ms. Feb 9 09:47:00.199668 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:47:00.199705 systemd[1]: Detected virtualization amazon. Feb 9 09:47:00.199737 systemd[1]: Detected architecture arm64. Feb 9 09:47:00.199769 systemd[1]: Detected first boot. Feb 9 09:47:00.199801 systemd[1]: Initializing machine ID from VM UUID. Feb 9 09:47:00.199836 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:47:00.199867 systemd[1]: Populated /etc with preset unit settings. 
Feb 9 09:47:00.199900 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:00.199934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:00.199968 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:00.199998 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 09:47:00.200029 kernel: audit: type=1334 audit(1707472019.811:85): prog-id=12 op=LOAD Feb 9 09:47:00.200058 kernel: audit: type=1334 audit(1707472019.811:86): prog-id=3 op=UNLOAD Feb 9 09:47:00.200089 kernel: audit: type=1334 audit(1707472019.811:87): prog-id=13 op=LOAD Feb 9 09:47:00.200118 kernel: audit: type=1334 audit(1707472019.811:88): prog-id=14 op=LOAD Feb 9 09:47:00.200147 kernel: audit: type=1334 audit(1707472019.811:89): prog-id=4 op=UNLOAD Feb 9 09:47:00.200176 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:47:00.200206 kernel: audit: type=1334 audit(1707472019.811:90): prog-id=5 op=UNLOAD Feb 9 09:47:00.200236 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:47:00.200267 kernel: audit: type=1334 audit(1707472019.817:91): prog-id=15 op=LOAD Feb 9 09:47:00.200297 kernel: audit: type=1334 audit(1707472019.817:92): prog-id=12 op=UNLOAD Feb 9 09:47:00.200327 kernel: audit: type=1334 audit(1707472019.820:93): prog-id=16 op=LOAD Feb 9 09:47:00.200362 kernel: audit: type=1334 audit(1707472019.822:94): prog-id=17 op=LOAD Feb 9 09:47:00.200391 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:47:00.200422 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Feb 9 09:47:00.200454 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 09:47:00.200504 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 09:47:00.200539 systemd[1]: Created slice system-getty.slice.
Feb 9 09:47:00.200570 systemd[1]: Created slice system-modprobe.slice.
Feb 9 09:47:00.200604 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 09:47:00.200640 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 09:47:00.200672 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 09:47:00.200702 systemd[1]: Created slice user.slice.
Feb 9 09:47:00.200732 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:47:00.200761 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 09:47:00.200790 systemd[1]: Set up automount boot.automount.
Feb 9 09:47:00.200821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 09:47:00.200853 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 09:47:00.200883 systemd[1]: Stopped target initrd-fs.target.
Feb 9 09:47:00.201658 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 09:47:00.201714 systemd[1]: Reached target integritysetup.target.
Feb 9 09:47:00.201749 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:47:00.201802 systemd[1]: Reached target remote-fs.target.
Feb 9 09:47:00.201833 systemd[1]: Reached target slices.target.
Feb 9 09:47:00.201864 systemd[1]: Reached target swap.target.
Feb 9 09:47:00.201894 systemd[1]: Reached target torcx.target.
Feb 9 09:47:00.201926 systemd[1]: Reached target veritysetup.target.
Feb 9 09:47:00.201973 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 09:47:00.202012 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 09:47:00.202043 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:47:00.202075 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:47:00.202105 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:47:00.202134 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 09:47:00.202164 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 09:47:00.202194 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 09:47:00.202226 systemd[1]: Mounting media.mount...
Feb 9 09:47:00.202256 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 09:47:00.202290 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 09:47:00.202322 systemd[1]: Mounting tmp.mount...
Feb 9 09:47:00.202351 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 09:47:00.202382 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 09:47:00.202411 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:47:00.202440 systemd[1]: Starting modprobe@configfs.service...
Feb 9 09:47:00.202489 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 09:47:00.202522 systemd[1]: Starting modprobe@drm.service...
Feb 9 09:47:00.202552 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 09:47:00.202584 systemd[1]: Starting modprobe@fuse.service...
Feb 9 09:47:00.202619 systemd[1]: Starting modprobe@loop.service...
Feb 9 09:47:00.202651 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 09:47:00.202681 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 09:47:00.202711 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 09:47:00.202743 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 09:47:00.202774 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 09:47:00.202804 systemd[1]: Stopped systemd-journald.service.
Feb 9 09:47:00.202833 kernel: fuse: init (API version 7.34)
Feb 9 09:47:00.202868 systemd[1]: Starting systemd-journald.service...
Feb 9 09:47:00.202900 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:47:00.202930 kernel: loop: module loaded
Feb 9 09:47:00.202958 systemd[1]: Starting systemd-network-generator.service...
Feb 9 09:47:00.202988 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 09:47:00.203019 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:47:00.203050 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 09:47:00.203089 systemd[1]: Stopped verity-setup.service.
Feb 9 09:47:00.203121 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 09:47:00.203150 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 09:47:00.203183 systemd[1]: Mounted media.mount.
Feb 9 09:47:00.203213 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 09:47:00.203242 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 09:47:00.203273 systemd[1]: Mounted tmp.mount.
Feb 9 09:47:00.203305 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:47:00.203337 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 09:47:00.203367 systemd[1]: Finished modprobe@configfs.service.
Feb 9 09:47:00.203403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 09:47:00.203438 systemd-journald[1483]: Journal started
Feb 9 09:47:00.203612 systemd-journald[1483]: Runtime Journal (/run/log/journal/ec2d4a22969faa833375d6cdf348ced2) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:46:55.003000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 09:46:55.209000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:46:55.209000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:46:55.209000 audit: BPF prog-id=10 op=LOAD
Feb 9 09:46:55.209000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 09:46:55.209000 audit: BPF prog-id=11 op=LOAD
Feb 9 09:46:55.209000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 09:46:55.544000 audit[1408]: AVC avc: denied { associate } for pid=1408 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 09:46:55.544000 audit[1408]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022802 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1391 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:46:55.544000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:46:55.548000 audit[1408]: AVC avc: denied { associate } for pid=1408 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 09:46:55.548000 audit[1408]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228d9 a2=1ed a3=0 items=2 ppid=1391 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:46:55.548000 audit: CWD cwd="/"
Feb 9 09:46:55.548000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:46:55.548000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:46:55.548000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:46:59.811000 audit: BPF prog-id=12 op=LOAD
Feb 9 09:46:59.811000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 09:46:59.811000 audit: BPF prog-id=13 op=LOAD
Feb 9 09:46:59.811000 audit: BPF prog-id=14 op=LOAD
Feb 9 09:46:59.811000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 09:46:59.811000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 09:46:59.817000 audit: BPF prog-id=15 op=LOAD
Feb 9 09:46:59.817000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 09:46:59.820000 audit: BPF prog-id=16 op=LOAD
Feb 9 09:46:59.822000 audit: BPF prog-id=17 op=LOAD
Feb 9 09:46:59.822000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 09:46:59.822000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 09:46:59.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:59.839000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 09:46:59.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:59.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.097000 audit: BPF prog-id=18 op=LOAD
Feb 9 09:47:00.097000 audit: BPF prog-id=19 op=LOAD
Feb 9 09:47:00.098000 audit: BPF prog-id=20 op=LOAD
Feb 9 09:47:00.098000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 09:47:00.098000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 09:47:00.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.188000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:47:00.188000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffff27827e0 a2=4000 a3=1 items=0 ppid=1 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:00.188000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:47:00.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:55.521829 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:47:00.219900 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 09:47:00.219953 systemd[1]: Started systemd-journald.service.
Feb 9 09:47:00.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:59.810527 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 09:47:00.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:55.543017 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:46:59.825124 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 09:46:55.543075 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:47:00.216320 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 09:46:55.543148 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 09:47:00.216661 systemd[1]: Finished modprobe@drm.service.
Feb 9 09:46:55.543178 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 09:47:00.219499 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 09:46:55.543258 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 09:47:00.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.220849 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 09:46:55.543290 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 09:47:00.223319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 09:46:55.543773 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 09:47:00.224712 systemd[1]: Finished modprobe@fuse.service.
Feb 9 09:46:55.543862 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:47:00.227261 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 09:46:55.543900 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:47:00.227584 systemd[1]: Finished modprobe@loop.service.
Feb 9 09:46:55.544798 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 09:47:00.230525 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:46:55.544894 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 09:46:55.544944 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 09:46:55.544987 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 09:47:00.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:46:55.545037 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 09:46:55.545077 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 09:46:58.930666 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:46:58.931195 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:46:58.931431 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:46:58.931907 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:47:00.233701 systemd[1]: Finished systemd-network-generator.service.
Feb 9 09:46:58.932014 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 09:46:58.932151 /usr/lib/systemd/system-generators/torcx-generator[1408]: time="2024-02-09T09:46:58Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 09:47:00.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.236866 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 09:47:00.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.239724 systemd[1]: Reached target network-pre.target.
Feb 9 09:47:00.244296 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 09:47:00.248387 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 09:47:00.251716 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 09:47:00.254935 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 09:47:00.259033 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 09:47:00.260812 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 09:47:00.264373 systemd[1]: Starting systemd-random-seed.service...
Feb 9 09:47:00.266736 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 09:47:00.269023 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:47:00.276462 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 09:47:00.279609 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 09:47:00.298204 systemd-journald[1483]: Time spent on flushing to /var/log/journal/ec2d4a22969faa833375d6cdf348ced2 is 91.302ms for 1143 entries.
Feb 9 09:47:00.298204 systemd-journald[1483]: System Journal (/var/log/journal/ec2d4a22969faa833375d6cdf348ced2) is 8.0M, max 195.6M, 187.6M free.
Feb 9 09:47:00.429313 systemd-journald[1483]: Received client request to flush runtime journal.
Feb 9 09:47:00.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.323898 systemd[1]: Finished systemd-random-seed.service.
Feb 9 09:47:00.326117 systemd[1]: Reached target first-boot-complete.target.
Feb 9 09:47:00.351846 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:47:00.385894 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 09:47:00.390195 systemd[1]: Starting systemd-sysusers.service...
Feb 9 09:47:00.431312 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 09:47:00.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.463881 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:47:00.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.468372 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 09:47:00.485712 udevadm[1528]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 09:47:00.532048 systemd[1]: Finished systemd-sysusers.service.
Feb 9 09:47:00.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.276000 audit: BPF prog-id=21 op=LOAD
Feb 9 09:47:01.276000 audit: BPF prog-id=22 op=LOAD
Feb 9 09:47:01.276000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 09:47:01.276000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 09:47:01.274133 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 09:47:01.279301 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:47:01.318923 systemd-udevd[1529]: Using default interface naming scheme 'v252'.
Feb 9 09:47:01.356776 systemd[1]: Started systemd-udevd.service.
Feb 9 09:47:01.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.359000 audit: BPF prog-id=23 op=LOAD
Feb 9 09:47:01.361981 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:47:01.369000 audit: BPF prog-id=24 op=LOAD
Feb 9 09:47:01.369000 audit: BPF prog-id=25 op=LOAD
Feb 9 09:47:01.369000 audit: BPF prog-id=26 op=LOAD
Feb 9 09:47:01.372264 systemd[1]: Starting systemd-userdbd.service...
Feb 9 09:47:01.438886 systemd[1]: Started systemd-userdbd.service.
Feb 9 09:47:01.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.493931 (udev-worker)[1545]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:47:01.582663 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 09:47:01.617689 systemd-networkd[1533]: lo: Link UP
Feb 9 09:47:01.617713 systemd-networkd[1533]: lo: Gained carrier
Feb 9 09:47:01.618730 systemd-networkd[1533]: Enumeration completed
Feb 9 09:47:01.618907 systemd[1]: Started systemd-networkd.service.
Feb 9 09:47:01.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.623215 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 09:47:01.627785 systemd-networkd[1533]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:47:01.633525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:47:01.633908 systemd-networkd[1533]: eth0: Link UP
Feb 9 09:47:01.634218 systemd-networkd[1533]: eth0: Gained carrier
Feb 9 09:47:01.643745 systemd-networkd[1533]: eth0: DHCPv4 address 172.31.16.94/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:47:01.773553 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1551)
Feb 9 09:47:01.883569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:47:01.886957 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 09:47:01.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.891537 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 09:47:01.958508 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:47:01.998282 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 09:47:02.000518 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:47:01.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.004858 systemd[1]: Starting lvm2-activation.service...
Feb 9 09:47:02.013824 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:47:02.052630 systemd[1]: Finished lvm2-activation.service.
Feb 9 09:47:02.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.054900 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:47:02.056811 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 09:47:02.056884 systemd[1]: Reached target local-fs.target.
Feb 9 09:47:02.058717 systemd[1]: Reached target machines.target.
Feb 9 09:47:02.063250 systemd[1]: Starting ldconfig.service...
Feb 9 09:47:02.065889 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 09:47:02.066087 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:47:02.071401 systemd[1]: Starting systemd-boot-update.service...
Feb 9 09:47:02.075655 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 09:47:02.086838 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 09:47:02.089964 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:47:02.090151 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:47:02.092748 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 09:47:02.115323 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1651 (bootctl)
Feb 9 09:47:02.117881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 09:47:02.142670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:47:02.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.166503 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:47:02.193985 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:47:02.217208 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:47:02.232849 systemd-fsck[1661]: fsck.fat 4.2 (2021-01-31) Feb 9 09:47:02.232849 systemd-fsck[1661]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 09:47:02.238818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:47:02.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.243955 systemd[1]: Mounting boot.mount... Feb 9 09:47:02.274077 systemd[1]: Mounted boot.mount. Feb 9 09:47:02.298429 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:47:02.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.553538 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:47:02.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 09:47:02.558443 systemd[1]: Starting audit-rules.service... Feb 9 09:47:02.562686 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:47:02.569458 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:47:02.572000 audit: BPF prog-id=27 op=LOAD Feb 9 09:47:02.576315 systemd[1]: Starting systemd-resolved.service... Feb 9 09:47:02.579000 audit: BPF prog-id=28 op=LOAD Feb 9 09:47:02.585017 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:47:02.591440 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:47:02.612618 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:47:02.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.614852 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:47:02.619000 audit[1681]: SYSTEM_BOOT pid=1681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.644524 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:47:02.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.837019 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:47:02.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:02.839270 systemd[1]: Reached target time-set.target. Feb 9 09:47:02.859692 systemd-networkd[1533]: eth0: Gained IPv6LL Feb 9 09:47:02.862966 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:47:02.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:02.896769 systemd-resolved[1679]: Positive Trust Anchors: Feb 9 09:47:02.896792 systemd-resolved[1679]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:47:02.896842 systemd-resolved[1679]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:47:02.931299 systemd-timesyncd[1680]: Contacted time server 23.131.160.7:123 (0.flatcar.pool.ntp.org). Feb 9 09:47:02.932209 systemd-timesyncd[1680]: Initial clock synchronization to Fri 2024-02-09 09:47:03.033752 UTC. Feb 9 09:47:02.950411 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:47:02.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:03.119348 augenrules[1699]: No rules Feb 9 09:47:03.118000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:47:03.118000 audit[1699]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc521d320 a2=420 a3=0 items=0 ppid=1675 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:03.118000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:47:03.122816 systemd[1]: Finished audit-rules.service. Feb 9 09:47:03.212868 systemd-resolved[1679]: Defaulting to hostname 'linux'. Feb 9 09:47:03.216060 systemd[1]: Started systemd-resolved.service. Feb 9 09:47:03.218162 systemd[1]: Reached target network.target. Feb 9 09:47:03.219968 systemd[1]: Reached target network-online.target. Feb 9 09:47:03.221873 systemd[1]: Reached target nss-lookup.target. Feb 9 09:47:03.351158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:47:03.352337 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:47:03.592501 ldconfig[1650]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:47:03.598678 systemd[1]: Finished ldconfig.service. Feb 9 09:47:03.604634 systemd[1]: Starting systemd-update-done.service... Feb 9 09:47:03.618725 systemd[1]: Finished systemd-update-done.service. Feb 9 09:47:03.621865 systemd[1]: Reached target sysinit.target. Feb 9 09:47:03.623832 systemd[1]: Started motdgen.path. Feb 9 09:47:03.625620 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:47:03.628153 systemd[1]: Started logrotate.timer. Feb 9 09:47:03.629870 systemd[1]: Started mdadm.timer. Feb 9 09:47:03.631339 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 9 09:47:03.633163 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:47:03.633229 systemd[1]: Reached target paths.target. Feb 9 09:47:03.634778 systemd[1]: Reached target timers.target. Feb 9 09:47:03.636899 systemd[1]: Listening on dbus.socket. Feb 9 09:47:03.641010 systemd[1]: Starting docker.socket... Feb 9 09:47:03.647687 systemd[1]: Listening on sshd.socket. Feb 9 09:47:03.649721 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:47:03.650751 systemd[1]: Listening on docker.socket. Feb 9 09:47:03.652756 systemd[1]: Reached target sockets.target. Feb 9 09:47:03.654657 systemd[1]: Reached target basic.target. Feb 9 09:47:03.656364 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:47:03.656572 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:47:03.670631 systemd[1]: Started amazon-ssm-agent.service. Feb 9 09:47:03.676723 systemd[1]: Starting containerd.service... Feb 9 09:47:03.681103 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:47:03.688717 systemd[1]: Starting dbus.service... Feb 9 09:47:03.699203 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:47:03.705871 systemd[1]: Starting extend-filesystems.service... Feb 9 09:47:03.712282 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:47:03.755109 jq[1711]: false Feb 9 09:47:03.714805 systemd[1]: Starting motdgen.service... Feb 9 09:47:03.718704 systemd[1]: Started nvidia.service. Feb 9 09:47:03.722873 systemd[1]: Starting prepare-cni-plugins.service... 
Feb 9 09:47:03.727085 systemd[1]: Starting prepare-critools.service... Feb 9 09:47:03.732111 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:47:03.736795 systemd[1]: Starting sshd-keygen.service... Feb 9 09:47:03.744741 systemd[1]: Starting systemd-logind.service... Feb 9 09:47:03.746560 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:47:03.746697 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:47:03.748357 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:47:03.801887 jq[1721]: true Feb 9 09:47:03.750036 systemd[1]: Starting update-engine.service... Feb 9 09:47:03.755708 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:47:03.811231 tar[1724]: ./ Feb 9 09:47:03.811231 tar[1724]: ./macvlan Feb 9 09:47:03.811953 tar[1726]: crictl Feb 9 09:47:03.783409 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:47:03.783838 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:47:03.859305 jq[1732]: true Feb 9 09:47:03.815153 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:47:03.815557 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:47:03.973043 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:47:03.973407 systemd[1]: Finished motdgen.service. Feb 9 09:47:03.993965 dbus-daemon[1710]: [system] SELinux support is enabled Feb 9 09:47:03.997835 systemd[1]: Started dbus.service. Feb 9 09:47:04.006152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 9 09:47:04.006210 systemd[1]: Reached target system-config.target. Feb 9 09:47:04.010737 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:47:04.010791 systemd[1]: Reached target user-config.target. Feb 9 09:47:04.025453 extend-filesystems[1712]: Found nvme0n1 Feb 9 09:47:04.031975 extend-filesystems[1712]: Found nvme0n1p1 Feb 9 09:47:04.034680 update_engine[1720]: I0209 09:47:04.034271 1720 main.cc:92] Flatcar Update Engine starting Feb 9 09:47:04.042865 extend-filesystems[1712]: Found nvme0n1p2 Feb 9 09:47:04.044615 extend-filesystems[1712]: Found nvme0n1p3 Feb 9 09:47:04.062309 dbus-daemon[1710]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1533 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 09:47:04.062655 extend-filesystems[1712]: Found usr Feb 9 09:47:04.067959 extend-filesystems[1712]: Found nvme0n1p4 Feb 9 09:47:04.067959 extend-filesystems[1712]: Found nvme0n1p6 Feb 9 09:47:04.081237 extend-filesystems[1712]: Found nvme0n1p7 Feb 9 09:47:04.081237 extend-filesystems[1712]: Found nvme0n1p9 Feb 9 09:47:04.081237 extend-filesystems[1712]: Checking size of /dev/nvme0n1p9 Feb 9 09:47:04.101608 bash[1765]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:47:04.088169 dbus-daemon[1710]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:47:04.097058 systemd[1]: Starting systemd-hostnamed.service... Feb 9 09:47:04.102070 amazon-ssm-agent[1707]: 2024/02/09 09:47:04 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 09:47:04.102243 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:47:04.111644 systemd[1]: Started update-engine.service. Feb 9 09:47:04.116773 systemd[1]: Started locksmithd.service. 
Feb 9 09:47:04.120647 update_engine[1720]: I0209 09:47:04.119299 1720 update_check_scheduler.cc:74] Next update check in 3m31s Feb 9 09:47:04.126520 amazon-ssm-agent[1707]: Initializing new seelog logger Feb 9 09:47:04.126520 amazon-ssm-agent[1707]: New Seelog Logger Creation Complete Feb 9 09:47:04.126520 amazon-ssm-agent[1707]: 2024/02/09 09:47:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:47:04.126520 amazon-ssm-agent[1707]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:47:04.126520 amazon-ssm-agent[1707]: 2024/02/09 09:47:04 processing appconfig overrides Feb 9 09:47:04.193808 extend-filesystems[1712]: Resized partition /dev/nvme0n1p9 Feb 9 09:47:04.212946 extend-filesystems[1781]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:47:04.235555 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 09:47:04.290139 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 09:47:04.319765 extend-filesystems[1781]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 09:47:04.319765 extend-filesystems[1781]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:47:04.319765 extend-filesystems[1781]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 09:47:04.327400 extend-filesystems[1712]: Resized filesystem in /dev/nvme0n1p9 Feb 9 09:47:04.339027 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:47:04.339450 systemd[1]: Finished extend-filesystems.service. Feb 9 09:47:04.359545 tar[1724]: ./static Feb 9 09:47:04.359719 env[1733]: time="2024-02-09T09:47:04.358425667Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:47:04.370434 systemd-logind[1719]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:47:04.373003 systemd-logind[1719]: New seat seat0. Feb 9 09:47:04.378417 systemd[1]: Started systemd-logind.service. 
Feb 9 09:47:04.477222 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:47:04.489039 tar[1724]: ./vlan Feb 9 09:47:04.521498 env[1733]: time="2024-02-09T09:47:04.521411389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:47:04.521754 env[1733]: time="2024-02-09T09:47:04.521699279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:04.533952 env[1733]: time="2024-02-09T09:47:04.533866477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:04.533952 env[1733]: time="2024-02-09T09:47:04.533942430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:04.534414 env[1733]: time="2024-02-09T09:47:04.534346196Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:04.534555 env[1733]: time="2024-02-09T09:47:04.534440898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:04.534555 env[1733]: time="2024-02-09T09:47:04.534499400Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:47:04.534555 env[1733]: time="2024-02-09T09:47:04.534531437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:47:04.534764 env[1733]: time="2024-02-09T09:47:04.534719723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:04.535251 env[1733]: time="2024-02-09T09:47:04.535192926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:04.535588 env[1733]: time="2024-02-09T09:47:04.535529453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:04.535693 env[1733]: time="2024-02-09T09:47:04.535584351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:47:04.535762 env[1733]: time="2024-02-09T09:47:04.535719292Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:47:04.535762 env[1733]: time="2024-02-09T09:47:04.535750079Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:47:04.547915 env[1733]: time="2024-02-09T09:47:04.547838594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:47:04.548071 env[1733]: time="2024-02-09T09:47:04.547917641Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:47:04.548071 env[1733]: time="2024-02-09T09:47:04.547952820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:47:04.548071 env[1733]: time="2024-02-09T09:47:04.548035471Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 9 09:47:04.548227 env[1733]: time="2024-02-09T09:47:04.548074425Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548227 env[1733]: time="2024-02-09T09:47:04.548110769Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548227 env[1733]: time="2024-02-09T09:47:04.548143558Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548810 env[1733]: time="2024-02-09T09:47:04.548755536Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548927 env[1733]: time="2024-02-09T09:47:04.548817036Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548927 env[1733]: time="2024-02-09T09:47:04.548852130Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548927 env[1733]: time="2024-02-09T09:47:04.548885235Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:47:04.548927 env[1733]: time="2024-02-09T09:47:04.548916883Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:47:04.549189 env[1733]: time="2024-02-09T09:47:04.549144814Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:47:04.549373 env[1733]: time="2024-02-09T09:47:04.549329168Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:47:04.550002 env[1733]: time="2024-02-09T09:47:04.549950733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 9 09:47:04.550106 env[1733]: time="2024-02-09T09:47:04.550020109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550106 env[1733]: time="2024-02-09T09:47:04.550055070Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:47:04.550218 env[1733]: time="2024-02-09T09:47:04.550175546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550384 env[1733]: time="2024-02-09T09:47:04.550332475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550492 env[1733]: time="2024-02-09T09:47:04.550384886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550492 env[1733]: time="2024-02-09T09:47:04.550417311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550492 env[1733]: time="2024-02-09T09:47:04.550448146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550740 env[1733]: time="2024-02-09T09:47:04.550499719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550740 env[1733]: time="2024-02-09T09:47:04.550533139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550740 env[1733]: time="2024-02-09T09:47:04.550563683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550740 env[1733]: time="2024-02-09T09:47:04.550602866Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 9 09:47:04.550963 env[1733]: time="2024-02-09T09:47:04.550906787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.550963 env[1733]: time="2024-02-09T09:47:04.550951965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.551068 env[1733]: time="2024-02-09T09:47:04.550984014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:47:04.551068 env[1733]: time="2024-02-09T09:47:04.551014193Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:47:04.551189 env[1733]: time="2024-02-09T09:47:04.551074043Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:47:04.551189 env[1733]: time="2024-02-09T09:47:04.551107912Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:47:04.551189 env[1733]: time="2024-02-09T09:47:04.551142739Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:47:04.551336 env[1733]: time="2024-02-09T09:47:04.551208062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:47:04.551719 env[1733]: time="2024-02-09T09:47:04.551587097Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:47:04.551719 env[1733]: time="2024-02-09T09:47:04.551710886Z" level=info msg="Connect containerd service" Feb 9 09:47:04.552963 env[1733]: time="2024-02-09T09:47:04.551776221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:47:04.556988 env[1733]: time="2024-02-09T09:47:04.556914554Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:47:04.559525 env[1733]: time="2024-02-09T09:47:04.557257125Z" level=info msg="Start subscribing containerd event" Feb 9 09:47:04.559525 env[1733]: time="2024-02-09T09:47:04.557358985Z" level=info msg="Start recovering state" Feb 9 09:47:04.559525 env[1733]: time="2024-02-09T09:47:04.557413156Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:47:04.559525 env[1733]: time="2024-02-09T09:47:04.557608007Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:47:04.559525 env[1733]: time="2024-02-09T09:47:04.557710535Z" level=info msg="containerd successfully booted in 0.266350s" Feb 9 09:47:04.557842 systemd[1]: Started containerd.service. 
Feb 9 09:47:04.562518 env[1733]: time="2024-02-09T09:47:04.560675242Z" level=info msg="Start event monitor" Feb 9 09:47:04.562518 env[1733]: time="2024-02-09T09:47:04.560746850Z" level=info msg="Start snapshots syncer" Feb 9 09:47:04.562518 env[1733]: time="2024-02-09T09:47:04.560773668Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:47:04.562518 env[1733]: time="2024-02-09T09:47:04.560794007Z" level=info msg="Start streaming server" Feb 9 09:47:04.684760 tar[1724]: ./portmap Feb 9 09:47:04.709852 dbus-daemon[1710]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 09:47:04.710131 systemd[1]: Started systemd-hostnamed.service. Feb 9 09:47:04.711787 dbus-daemon[1710]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1772 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 09:47:04.717975 systemd[1]: Starting polkit.service... Feb 9 09:47:04.762020 polkitd[1849]: Started polkitd version 121 Feb 9 09:47:04.792696 polkitd[1849]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 09:47:04.793161 polkitd[1849]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 09:47:04.798369 polkitd[1849]: Finished loading, compiling and executing 2 rules Feb 9 09:47:04.801093 dbus-daemon[1710]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 09:47:04.801392 systemd[1]: Started polkit.service. Feb 9 09:47:04.803508 polkitd[1849]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 09:47:04.854659 systemd-resolved[1679]: System hostname changed to 'ip-172-31-16-94'. 
Feb 9 09:47:04.854666 systemd-hostnamed[1772]: Hostname set to (transient) Feb 9 09:47:04.893057 tar[1724]: ./host-local Feb 9 09:47:05.069457 tar[1724]: ./vrf Feb 9 09:47:05.081645 coreos-metadata[1709]: Feb 09 09:47:05.081 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:47:05.082141 coreos-metadata[1709]: Feb 09 09:47:05.081 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 09:47:05.082141 coreos-metadata[1709]: Feb 09 09:47:05.081 INFO Fetch successful Feb 9 09:47:05.082141 coreos-metadata[1709]: Feb 09 09:47:05.081 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 09:47:05.084844 coreos-metadata[1709]: Feb 09 09:47:05.084 INFO Fetch successful Feb 9 09:47:05.087086 unknown[1709]: wrote ssh authorized keys file for user: core Feb 9 09:47:05.115937 update-ssh-keys[1885]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:47:05.117794 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 9 09:47:05.198636 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Create new startup processor Feb 9 09:47:05.201041 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 09:47:05.201041 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing bookkeeping folders Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO removing the completed state files Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing bookkeeping folders for long running plugins Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing healthcheck folders for long running plugins Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing locations for inventory plugin Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing default location for custom inventory Feb 9 09:47:05.201232 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing default location for file inventory Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Initializing default location for role inventory Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Init the cloudwatchlogs publisher Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:configurePackage Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:downloadContent Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:runDocument Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 
09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:configureDocker Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 09:47:05.201649 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO OS: linux, Arch: arm64 Feb 9 09:47:05.203572 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] Starting document processing engine... 
Feb 9 09:47:05.206626 amazon-ssm-agent[1707]: datastore file /var/lib/amazon/ssm/i-0c6fb6b8f8409c5b8/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 09:47:05.232520 tar[1724]: ./bridge Feb 9 09:47:05.304990 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 09:47:05.362143 tar[1724]: ./tuning Feb 9 09:47:05.400137 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 09:47:05.460000 tar[1724]: ./firewall Feb 9 09:47:05.494720 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] Starting message polling Feb 9 09:47:05.572914 tar[1724]: ./host-device Feb 9 09:47:05.589492 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 09:47:05.672929 tar[1724]: ./sbr Feb 9 09:47:05.684405 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [instanceID=i-0c6fb6b8f8409c5b8] Starting association polling Feb 9 09:47:05.758198 tar[1724]: ./loopback Feb 9 09:47:05.779546 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 09:47:05.846638 tar[1724]: ./dhcp Feb 9 09:47:05.874872 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 09:47:05.940505 systemd[1]: Finished prepare-critools.service. 
Feb 9 09:47:05.970410 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 09:47:06.033567 tar[1724]: ./ptp Feb 9 09:47:06.066189 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 09:47:06.096316 tar[1724]: ./ipvlan Feb 9 09:47:06.158216 tar[1724]: ./bandwidth Feb 9 09:47:06.162095 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 09:47:06.237805 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:47:06.258227 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 09:47:06.316592 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:47:06.354518 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 09:47:06.451099 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 09:47:06.547849 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c6fb6b8f8409c5b8, requestId: 90839c33-270c-4f4e-9b84-3ed1bd6365a1 Feb 9 09:47:06.644691 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 09:47:06.741767 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:47:06.839105 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] listening reply. Feb 9 09:47:06.936530 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [OfflineService] Starting document processing engine... 
Feb 9 09:47:07.034237 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 09:47:07.132138 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [StartupProcessor] Executing startup processor tasks Feb 9 09:47:07.230181 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [OfflineService] [EngineProcessor] Starting Feb 9 09:47:07.328397 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 09:47:07.426904 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [OfflineService] Starting message polling Feb 9 09:47:07.525491 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [OfflineService] Starting send replies to MDS Feb 9 09:47:07.620813 sshd_keygen[1751]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:47:07.624410 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 09:47:07.657874 systemd[1]: Finished sshd-keygen.service. Feb 9 09:47:07.662322 systemd[1]: Starting issuegen.service... Feb 9 09:47:07.673651 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:47:07.674009 systemd[1]: Finished issuegen.service. Feb 9 09:47:07.678462 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:47:07.692316 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:47:07.697142 systemd[1]: Started getty@tty1.service. Feb 9 09:47:07.701519 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 09:47:07.703717 systemd[1]: Reached target getty.target. Feb 9 09:47:07.705447 systemd[1]: Reached target multi-user.target. Feb 9 09:47:07.709843 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Feb 9 09:47:07.723519 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 09:47:07.726659 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:47:07.727028 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:47:07.729194 systemd[1]: Startup finished in 1.182s (kernel) + 11.261s (initrd) + 12.872s (userspace) = 25.316s. Feb 9 09:47:07.822845 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 09:47:07.923040 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 09:47:08.023049 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c6fb6b8f8409c5b8?role=subscribe&stream=input Feb 9 09:47:08.123387 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c6fb6b8f8409c5b8?role=subscribe&stream=input Feb 9 09:47:08.223713 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 09:47:08.323996 amazon-ssm-agent[1707]: 2024-02-09 09:47:05 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 09:47:12.899361 systemd[1]: Created slice system-sshd.slice. Feb 9 09:47:12.902672 systemd[1]: Started sshd@0-172.31.16.94:22-139.178.89.65:55260.service. 
Feb 9 09:47:13.084076 sshd[1921]: Accepted publickey for core from 139.178.89.65 port 55260 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:13.088573 sshd[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:13.105522 systemd[1]: Created slice user-500.slice. Feb 9 09:47:13.108837 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:47:13.117609 systemd-logind[1719]: New session 1 of user core. Feb 9 09:47:13.127370 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:47:13.130398 systemd[1]: Starting user@500.service... Feb 9 09:47:13.137717 (systemd)[1924]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:13.314771 systemd[1924]: Queued start job for default target default.target. Feb 9 09:47:13.316528 systemd[1924]: Reached target paths.target. Feb 9 09:47:13.316815 systemd[1924]: Reached target sockets.target. Feb 9 09:47:13.317052 systemd[1924]: Reached target timers.target. Feb 9 09:47:13.317275 systemd[1924]: Reached target basic.target. Feb 9 09:47:13.317655 systemd[1924]: Reached target default.target. Feb 9 09:47:13.317738 systemd[1]: Started user@500.service. Feb 9 09:47:13.318038 systemd[1924]: Startup finished in 168ms. Feb 9 09:47:13.319868 systemd[1]: Started session-1.scope. Feb 9 09:47:13.468345 systemd[1]: Started sshd@1-172.31.16.94:22-139.178.89.65:55266.service. Feb 9 09:47:13.648745 sshd[1933]: Accepted publickey for core from 139.178.89.65 port 55266 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:13.651800 sshd[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:13.660910 systemd[1]: Started session-2.scope. Feb 9 09:47:13.661588 systemd-logind[1719]: New session 2 of user core. Feb 9 09:47:13.797453 sshd[1933]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:13.802423 systemd-logind[1719]: Session 2 logged out. 
Waiting for processes to exit. Feb 9 09:47:13.803040 systemd[1]: sshd@1-172.31.16.94:22-139.178.89.65:55266.service: Deactivated successfully. Feb 9 09:47:13.804313 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:47:13.806028 systemd-logind[1719]: Removed session 2. Feb 9 09:47:13.826194 systemd[1]: Started sshd@2-172.31.16.94:22-139.178.89.65:55276.service. Feb 9 09:47:14.003164 sshd[1939]: Accepted publickey for core from 139.178.89.65 port 55276 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:14.006182 sshd[1939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:14.013573 systemd-logind[1719]: New session 3 of user core. Feb 9 09:47:14.015031 systemd[1]: Started session-3.scope. Feb 9 09:47:14.138781 sshd[1939]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:14.143205 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:47:14.144292 systemd[1]: sshd@2-172.31.16.94:22-139.178.89.65:55276.service: Deactivated successfully. Feb 9 09:47:14.145861 systemd-logind[1719]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:47:14.147861 systemd-logind[1719]: Removed session 3. Feb 9 09:47:14.166399 systemd[1]: Started sshd@3-172.31.16.94:22-139.178.89.65:55278.service. Feb 9 09:47:14.342302 sshd[1945]: Accepted publickey for core from 139.178.89.65 port 55278 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:14.344744 sshd[1945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:14.353197 systemd[1]: Started session-4.scope. Feb 9 09:47:14.354002 systemd-logind[1719]: New session 4 of user core. Feb 9 09:47:14.486256 sshd[1945]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:14.491494 systemd-logind[1719]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:47:14.492057 systemd[1]: sshd@3-172.31.16.94:22-139.178.89.65:55278.service: Deactivated successfully. 
Feb 9 09:47:14.493235 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:47:14.494692 systemd-logind[1719]: Removed session 4. Feb 9 09:47:14.514793 systemd[1]: Started sshd@4-172.31.16.94:22-139.178.89.65:55294.service. Feb 9 09:47:14.691861 sshd[1951]: Accepted publickey for core from 139.178.89.65 port 55294 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:14.694613 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:14.702818 systemd-logind[1719]: New session 5 of user core. Feb 9 09:47:14.703291 systemd[1]: Started session-5.scope. Feb 9 09:47:14.822127 sudo[1954]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:47:14.823172 sudo[1954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:47:15.468355 systemd[1]: Reloading. Feb 9 09:47:15.596405 /usr/lib/systemd/system-generators/torcx-generator[1984]: time="2024-02-09T09:47:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:15.605607 /usr/lib/systemd/system-generators/torcx-generator[1984]: time="2024-02-09T09:47:15Z" level=info msg="torcx already run" Feb 9 09:47:15.749547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:15.750098 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:15.788167 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 09:47:15.982522 systemd[1]: Started kubelet.service. Feb 9 09:47:16.005158 systemd[1]: Starting coreos-metadata.service... Feb 9 09:47:16.111819 kubelet[2038]: E0209 09:47:16.111708 2038 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:47:16.116964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:16.117287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:47:16.176503 coreos-metadata[2046]: Feb 09 09:47:16.176 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:47:16.177877 coreos-metadata[2046]: Feb 09 09:47:16.177 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 9 09:47:16.178690 coreos-metadata[2046]: Feb 09 09:47:16.178 INFO Fetch successful Feb 9 09:47:16.178785 coreos-metadata[2046]: Feb 09 09:47:16.178 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 9 09:47:16.179515 coreos-metadata[2046]: Feb 09 09:47:16.179 INFO Fetch successful Feb 9 09:47:16.179627 coreos-metadata[2046]: Feb 09 09:47:16.179 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 9 09:47:16.180310 coreos-metadata[2046]: Feb 09 09:47:16.180 INFO Fetch successful Feb 9 09:47:16.180427 coreos-metadata[2046]: Feb 09 09:47:16.180 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 9 09:47:16.181169 coreos-metadata[2046]: Feb 09 09:47:16.181 INFO Fetch successful Feb 9 09:47:16.181255 coreos-metadata[2046]: Feb 09 09:47:16.181 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 9 09:47:16.181891 coreos-metadata[2046]: Feb 09 09:47:16.181 INFO Fetch successful Feb 9 09:47:16.181971 coreos-metadata[2046]: Feb 09 09:47:16.181 INFO Fetching 
http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 9 09:47:16.182611 coreos-metadata[2046]: Feb 09 09:47:16.182 INFO Fetch successful Feb 9 09:47:16.182693 coreos-metadata[2046]: Feb 09 09:47:16.182 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 9 09:47:16.183359 coreos-metadata[2046]: Feb 09 09:47:16.183 INFO Fetch successful Feb 9 09:47:16.183438 coreos-metadata[2046]: Feb 09 09:47:16.183 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 9 09:47:16.184234 coreos-metadata[2046]: Feb 09 09:47:16.184 INFO Fetch successful Feb 9 09:47:16.198160 systemd[1]: Finished coreos-metadata.service. Feb 9 09:47:16.600046 systemd[1]: Stopped kubelet.service. Feb 9 09:47:16.629986 systemd[1]: Reloading. Feb 9 09:47:16.747535 /usr/lib/systemd/system-generators/torcx-generator[2104]: time="2024-02-09T09:47:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:16.748078 /usr/lib/systemd/system-generators/torcx-generator[2104]: time="2024-02-09T09:47:16Z" level=info msg="torcx already run" Feb 9 09:47:16.915108 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:16.915384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:16.953600 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:17.159633 systemd[1]: Started kubelet.service. 
Feb 9 09:47:17.252914 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:17.253543 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:17.253754 kubelet[2157]: I0209 09:47:17.253678 2157 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:17.256086 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:47:17.256086 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:19.415270 kubelet[2157]: I0209 09:47:19.415224 2157 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:47:19.415270 kubelet[2157]: I0209 09:47:19.415269 2157 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:19.416013 kubelet[2157]: I0209 09:47:19.415648 2157 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:47:19.423149 kubelet[2157]: I0209 09:47:19.423094 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:19.424725 kubelet[2157]: W0209 09:47:19.424686 2157 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:47:19.426024 kubelet[2157]: I0209 09:47:19.425988 2157 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:47:19.426595 kubelet[2157]: I0209 09:47:19.426552 2157 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:19.426685 kubelet[2157]: I0209 09:47:19.426671 2157 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:47:19.426843 kubelet[2157]: I0209 09:47:19.426722 2157 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:47:19.426843 kubelet[2157]: I0209 09:47:19.426748 2157 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:47:19.426974 kubelet[2157]: I0209 09:47:19.426924 2157 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:47:19.432153 kubelet[2157]: I0209 09:47:19.432118 2157 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:47:19.432378 kubelet[2157]: I0209 09:47:19.432354 2157 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:19.432614 kubelet[2157]: I0209 09:47:19.432587 2157 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:47:19.432770 kubelet[2157]: I0209 09:47:19.432746 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:19.433363 kubelet[2157]: E0209 09:47:19.433251 2157 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:19.433363 kubelet[2157]: E0209 09:47:19.433325 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:19.434364 kubelet[2157]: I0209 09:47:19.434324 2157 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:47:19.435071 kubelet[2157]: W0209 09:47:19.435033 2157 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:47:19.437558 kubelet[2157]: I0209 09:47:19.436276 2157 server.go:1186] "Started kubelet" Feb 9 09:47:19.439109 kubelet[2157]: I0209 09:47:19.439075 2157 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:19.440286 kubelet[2157]: I0209 09:47:19.440251 2157 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:47:19.442686 kubelet[2157]: E0209 09:47:19.442650 2157 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:19.442872 kubelet[2157]: E0209 09:47:19.442848 2157 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:19.443067 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:47:19.443635 kubelet[2157]: I0209 09:47:19.443286 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:19.450739 kubelet[2157]: I0209 09:47:19.450702 2157 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:47:19.452823 kubelet[2157]: I0209 09:47:19.452787 2157 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:19.455115 kubelet[2157]: E0209 09:47:19.454827 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5f6820a87", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 436225159, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 436225159, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will 
not retry!) Feb 9 09:47:19.455441 kubelet[2157]: W0209 09:47:19.455377 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:19.455441 kubelet[2157]: E0209 09:47:19.455415 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:19.455609 kubelet[2157]: W0209 09:47:19.455497 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:19.455609 kubelet[2157]: E0209 09:47:19.455523 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:19.455807 kubelet[2157]: E0209 09:47:19.455764 2157 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:47:19.466236 kubelet[2157]: W0209 09:47:19.466185 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:19.466236 kubelet[2157]: E0209 09:47:19.466235 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:19.466485 kubelet[2157]: E0209 09:47:19.466341 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5f6e6d691", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 442830993, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 442830993, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.530804 kubelet[2157]: I0209 09:47:19.530770 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:19.531014 kubelet[2157]: I0209 09:47:19.530992 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:19.531139 kubelet[2157]: I0209 09:47:19.531119 2157 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:19.531663 kubelet[2157]: E0209 09:47:19.531532 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.534255 kubelet[2157]: E0209 09:47:19.534119 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.540860 kubelet[2157]: E0209 09:47:19.535792 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.541744 kubelet[2157]: I0209 09:47:19.541696 2157 policy_none.go:49] "None policy: Start" Feb 9 09:47:19.543051 kubelet[2157]: I0209 09:47:19.543008 2157 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:19.543201 kubelet[2157]: I0209 09:47:19.543060 2157 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:19.553526 kubelet[2157]: I0209 09:47:19.552256 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94" Feb 9 09:47:19.553256 systemd[1]: Created slice kubepods.slice. Feb 9 09:47:19.555135 kubelet[2157]: E0209 09:47:19.554246 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.94" Feb 9 09:47:19.555135 kubelet[2157]: E0209 09:47:19.554913 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 552197934, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:19.557960 kubelet[2157]: E0209 09:47:19.557709 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 552206071, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.561946 kubelet[2157]: E0209 09:47:19.561752 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 552213402, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:19.565127 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:47:19.574590 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 09:47:19.584294 kubelet[2157]: I0209 09:47:19.584259 2157 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:19.584952 kubelet[2157]: I0209 09:47:19.584920 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:19.587862 kubelet[2157]: E0209 09:47:19.586674 2157 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.94\" not found" Feb 9 09:47:19.596266 kubelet[2157]: E0209 09:47:19.596083 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5ffec09a8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 594166696, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 594166696, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.658159 kubelet[2157]: E0209 09:47:19.658122 2157 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:47:19.690131 kubelet[2157]: I0209 09:47:19.689995 2157 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:47:19.734283 kubelet[2157]: I0209 09:47:19.734230 2157 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 09:47:19.734283 kubelet[2157]: I0209 09:47:19.734272 2157 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:47:19.734529 kubelet[2157]: I0209 09:47:19.734307 2157 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:47:19.734529 kubelet[2157]: E0209 09:47:19.734384 2157 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:47:19.736986 kubelet[2157]: W0209 09:47:19.736945 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:19.737225 kubelet[2157]: E0209 09:47:19.737201 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:19.755778 kubelet[2157]: I0209 09:47:19.755744 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94" Feb 9 09:47:19.757304 kubelet[2157]: E0209 09:47:19.757267 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.94" Feb 9 09:47:19.757772 kubelet[2157]: E0209 09:47:19.757662 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 755693733, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.759375 kubelet[2157]: E0209 09:47:19.759258 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 755701136, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:19.839352 kubelet[2157]: E0209 09:47:19.839223 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 755706112, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:20.060695 kubelet[2157]: E0209 09:47:20.060660 2157 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:47:20.158426 kubelet[2157]: I0209 09:47:20.158365 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94" Feb 9 09:47:20.159924 kubelet[2157]: E0209 09:47:20.159878 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.94" Feb 9 09:47:20.160336 kubelet[2157]: E0209 09:47:20.160228 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 158318924, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:20.238360 kubelet[2157]: E0209 09:47:20.238237 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 158326638, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:20.433732 kubelet[2157]: E0209 09:47:20.433586 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:20.438532 kubelet[2157]: E0209 09:47:20.438385 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 158331264, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:20.715111 kubelet[2157]: W0209 09:47:20.714976 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:20.715111 kubelet[2157]: E0209 09:47:20.715051 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:20.863159 kubelet[2157]: E0209 09:47:20.863102 2157 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:47:20.884685 kubelet[2157]: W0209 09:47:20.884635 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:20.884836 kubelet[2157]: E0209 09:47:20.884707 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:20.961124 kubelet[2157]: I0209 09:47:20.961071 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94" Feb 9 09:47:20.962828 kubelet[2157]: E0209 09:47:20.962793 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in 
API group \"\" at the cluster scope" node="172.31.16.94" Feb 9 09:47:20.963342 kubelet[2157]: E0209 09:47:20.963204 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 961023114, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:20.965245 kubelet[2157]: E0209 09:47:20.965039 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 961030420, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:20.966189 kubelet[2157]: W0209 09:47:20.966158 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:20.966401 kubelet[2157]: E0209 09:47:20.966380 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:20.985682 kubelet[2157]: W0209 09:47:20.985640 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:20.985682 kubelet[2157]: E0209 09:47:20.985684 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:21.039587 kubelet[2157]: E0209 09:47:21.039438 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 
172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 20, 961035046, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:21.433883 kubelet[2157]: E0209 09:47:21.433833 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:22.435195 kubelet[2157]: E0209 09:47:22.435122 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:22.465330 kubelet[2157]: E0209 09:47:22.465276 2157 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:47:22.563832 kubelet[2157]: I0209 09:47:22.563781 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94" Feb 9 09:47:22.565457 kubelet[2157]: E0209 09:47:22.565423 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.94" Feb 9 09:47:22.565895 kubelet[2157]: E0209 09:47:22.565783 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 22, 563735280, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:22.567595 kubelet[2157]: E0209 09:47:22.567447 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 22, 563742620, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:22.569001 kubelet[2157]: E0209 09:47:22.568888 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 22, 563747461, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:22.694456 kubelet[2157]: W0209 09:47:22.694326 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:47:22.694669 kubelet[2157]: E0209 09:47:22.694647 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:47:22.869462 kubelet[2157]: W0209 09:47:22.869426 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:47:22.869695 kubelet[2157]: E0209 09:47:22.869672 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:47:23.436035 kubelet[2157]: E0209 09:47:23.435992 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:23.894653 kubelet[2157]: W0209 09:47:23.894616 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:47:23.894859 kubelet[2157]: E0209 09:47:23.894838 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 09:47:23.900967 kubelet[2157]: W0209 09:47:23.900936 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:47:23.901130 kubelet[2157]: E0209 09:47:23.901110 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:47:24.436923 kubelet[2157]: E0209 09:47:24.436884 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:25.437936 kubelet[2157]: E0209 09:47:25.437896 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:25.667731 kubelet[2157]: E0209 09:47:25.667676 2157 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.16.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 09:47:25.767239 kubelet[2157]: I0209 09:47:25.767088 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94"
Feb 9 09:47:25.769503 kubelet[2157]: E0209 09:47:25.769364 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cdbb2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529208754, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 25, 767039416, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cdbb2" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:47:25.769809 kubelet[2157]: E0209 09:47:25.769781 2157 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.94"
Feb 9 09:47:25.770796 kubelet[2157]: E0209 09:47:25.770700 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0cf9e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529216481, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 25, 767047221, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0cf9e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:47:25.772445 kubelet[2157]: E0209 09:47:25.772334 2157 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.94.17b228c5fc0d15b8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.94", UID:"172.31.16.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 19, 529223608, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 25, 767051965, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.94.17b228c5fc0d15b8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 09:47:25.804820 amazon-ssm-agent[1707]: 2024-02-09 09:47:25 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 9 09:47:25.950128 kubelet[2157]: W0209 09:47:25.950086 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:47:25.950268 kubelet[2157]: E0209 09:47:25.950137 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 09:47:26.439133 kubelet[2157]: E0209 09:47:26.439061 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:27.439450 kubelet[2157]: E0209 09:47:27.439408 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:27.762306 kubelet[2157]: W0209 09:47:27.762169 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:47:27.762306 kubelet[2157]: E0209 09:47:27.762243 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 09:47:28.041568 kubelet[2157]: W0209 09:47:28.041349 2157 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:47:28.041568 kubelet[2157]: E0209 09:47:28.041424 2157 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 09:47:28.440662 kubelet[2157]: E0209 09:47:28.440565 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:29.423283 kubelet[2157]: I0209 09:47:29.423221 2157 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 09:47:29.441688 kubelet[2157]: E0209 09:47:29.441656 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:29.586938 kubelet[2157]: E0209 09:47:29.586892 2157 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.94\" not found"
Feb 9 09:47:29.815577 kubelet[2157]: E0209 09:47:29.815522 2157 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.94" not found
Feb 9 09:47:30.442864 kubelet[2157]: E0209 09:47:30.442809 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:31.058626 kubelet[2157]: E0209 09:47:31.058583 2157 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.94" not found
Feb 9 09:47:31.444496 kubelet[2157]: E0209 09:47:31.444441 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:32.075054 kubelet[2157]: E0209 09:47:32.075022 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.94\" not found" node="172.31.16.94"
Feb 9 09:47:32.171390 kubelet[2157]: I0209 09:47:32.171360 2157 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.94"
Feb 9 09:47:32.445403 kubelet[2157]: E0209 09:47:32.445269 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:32.461134 kubelet[2157]: I0209 09:47:32.461093 2157 kubelet_node_status.go:73] "Successfully registered node" node="172.31.16.94"
Feb 9 09:47:32.475924 kubelet[2157]: E0209 09:47:32.475861 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:32.576745 kubelet[2157]: E0209 09:47:32.576700 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:32.677053 kubelet[2157]: E0209 09:47:32.676993 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:32.685698 sudo[1954]: pam_unix(sudo:session): session closed for user root
Feb 9 09:47:32.710119 sshd[1951]: pam_unix(sshd:session): session closed for user core
Feb 9 09:47:32.715728 systemd[1]: sshd@4-172.31.16.94:22-139.178.89.65:55294.service: Deactivated successfully.
Feb 9 09:47:32.717008 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 09:47:32.718217 systemd-logind[1719]: Session 5 logged out. Waiting for processes to exit.
Feb 9 09:47:32.719958 systemd-logind[1719]: Removed session 5.
Feb 9 09:47:32.777351 kubelet[2157]: E0209 09:47:32.777307 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:32.878389 kubelet[2157]: E0209 09:47:32.878335 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:32.979116 kubelet[2157]: E0209 09:47:32.979014 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.080212 kubelet[2157]: E0209 09:47:33.080156 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.180831 kubelet[2157]: E0209 09:47:33.180793 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.281448 kubelet[2157]: E0209 09:47:33.281348 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.382040 kubelet[2157]: E0209 09:47:33.381990 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.445819 kubelet[2157]: E0209 09:47:33.445763 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:33.482787 kubelet[2157]: E0209 09:47:33.482756 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.583427 kubelet[2157]: E0209 09:47:33.583372 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.684347 kubelet[2157]: E0209 09:47:33.684294 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.784782 kubelet[2157]: E0209 09:47:33.784748 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.885456 kubelet[2157]: E0209 09:47:33.885354 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:33.986412 kubelet[2157]: E0209 09:47:33.986359 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.086978 kubelet[2157]: E0209 09:47:34.086931 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.187333 kubelet[2157]: E0209 09:47:34.187218 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.287856 kubelet[2157]: E0209 09:47:34.287805 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.388432 kubelet[2157]: E0209 09:47:34.388386 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.446322 kubelet[2157]: E0209 09:47:34.446209 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:34.488768 kubelet[2157]: E0209 09:47:34.488716 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.588852 kubelet[2157]: E0209 09:47:34.588803 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.689678 kubelet[2157]: E0209 09:47:34.689628 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.790778 kubelet[2157]: E0209 09:47:34.790663 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.866042 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 9 09:47:34.891370 kubelet[2157]: E0209 09:47:34.891299 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:34.992050 kubelet[2157]: E0209 09:47:34.992006 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.092804 kubelet[2157]: E0209 09:47:35.092759 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.193560 kubelet[2157]: E0209 09:47:35.193516 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.294809 kubelet[2157]: E0209 09:47:35.294778 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.395636 kubelet[2157]: E0209 09:47:35.395538 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.447554 kubelet[2157]: E0209 09:47:35.447524 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:35.496036 kubelet[2157]: E0209 09:47:35.496007 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.596723 kubelet[2157]: E0209 09:47:35.596662 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.697543 kubelet[2157]: E0209 09:47:35.697398 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.797862 kubelet[2157]: E0209 09:47:35.797809 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.898423 kubelet[2157]: E0209 09:47:35.898373 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:35.999363 kubelet[2157]: E0209 09:47:35.999270 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:36.099964 kubelet[2157]: E0209 09:47:36.099926 2157 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.94\" not found"
Feb 9 09:47:36.201634 kubelet[2157]: I0209 09:47:36.201602 2157 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 9 09:47:36.202420 env[1733]: time="2024-02-09T09:47:36.202347360Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 09:47:36.203062 kubelet[2157]: I0209 09:47:36.202759 2157 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 9 09:47:36.442871 kubelet[2157]: I0209 09:47:36.442837 2157 apiserver.go:52] "Watching apiserver"
Feb 9 09:47:36.446796 kubelet[2157]: I0209 09:47:36.446759 2157 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:47:36.447061 kubelet[2157]: I0209 09:47:36.447038 2157 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:47:36.448362 kubelet[2157]: E0209 09:47:36.448315 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:36.454185 kubelet[2157]: I0209 09:47:36.454147 2157 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 09:47:36.456820 systemd[1]: Created slice kubepods-besteffort-pod10d325b3_439b_4697_9ba0_2fd88988d7f5.slice.
Feb 9 09:47:36.481703 systemd[1]: Created slice kubepods-burstable-podb0af1f70_c382_40a3_b10d_f69950757c72.slice.
Feb 9 09:47:36.548633 kubelet[2157]: I0209 09:47:36.548601 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10d325b3-439b-4697-9ba0-2fd88988d7f5-xtables-lock\") pod \"kube-proxy-k4966\" (UID: \"10d325b3-439b-4697-9ba0-2fd88988d7f5\") " pod="kube-system/kube-proxy-k4966"
Feb 9 09:47:36.548957 kubelet[2157]: I0209 09:47:36.548915 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-run\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.549147 kubelet[2157]: I0209 09:47:36.549116 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-kernel\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.549327 kubelet[2157]: I0209 09:47:36.549285 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpv4t\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-kube-api-access-hpv4t\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.549588 kubelet[2157]: I0209 09:47:36.549567 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-hostproc\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.549774 kubelet[2157]: I0209 09:47:36.549744 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0af1f70-c382-40a3-b10d-f69950757c72-clustermesh-secrets\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.549970 kubelet[2157]: I0209 09:47:36.549908 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-config-path\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.550153 kubelet[2157]: I0209 09:47:36.550122 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10d325b3-439b-4697-9ba0-2fd88988d7f5-kube-proxy\") pod \"kube-proxy-k4966\" (UID: \"10d325b3-439b-4697-9ba0-2fd88988d7f5\") " pod="kube-system/kube-proxy-k4966"
Feb 9 09:47:36.550336 kubelet[2157]: I0209 09:47:36.550306 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrqz\" (UniqueName: \"kubernetes.io/projected/10d325b3-439b-4697-9ba0-2fd88988d7f5-kube-api-access-7rrqz\") pod \"kube-proxy-k4966\" (UID: \"10d325b3-439b-4697-9ba0-2fd88988d7f5\") " pod="kube-system/kube-proxy-k4966"
Feb 9 09:47:36.550521 kubelet[2157]: I0209 09:47:36.550498 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cni-path\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.550717 kubelet[2157]: I0209 09:47:36.550685 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-etc-cni-netd\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.550883 kubelet[2157]: I0209 09:47:36.550851 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-net\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.551074 kubelet[2157]: I0209 09:47:36.551036 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10d325b3-439b-4697-9ba0-2fd88988d7f5-lib-modules\") pod \"kube-proxy-k4966\" (UID: \"10d325b3-439b-4697-9ba0-2fd88988d7f5\") " pod="kube-system/kube-proxy-k4966"
Feb 9 09:47:36.551303 kubelet[2157]: I0209 09:47:36.551275 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-bpf-maps\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.551524 kubelet[2157]: I0209 09:47:36.551506 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-cgroup\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.551703 kubelet[2157]: I0209 09:47:36.551671 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-lib-modules\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.551880 kubelet[2157]: I0209 09:47:36.551842 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-xtables-lock\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.552057 kubelet[2157]: I0209 09:47:36.552026 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-hubble-tls\") pod \"cilium-g6rcx\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") " pod="kube-system/cilium-g6rcx"
Feb 9 09:47:36.552203 kubelet[2157]: I0209 09:47:36.552182 2157 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 09:47:36.795643 env[1733]: time="2024-02-09T09:47:36.795438306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6rcx,Uid:b0af1f70-c382-40a3-b10d-f69950757c72,Namespace:kube-system,Attempt:0,}"
Feb 9 09:47:37.077806 env[1733]: time="2024-02-09T09:47:37.077710846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4966,Uid:10d325b3-439b-4697-9ba0-2fd88988d7f5,Namespace:kube-system,Attempt:0,}"
Feb 9 09:47:37.346438 env[1733]: time="2024-02-09T09:47:37.346275144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.349563 env[1733]: time="2024-02-09T09:47:37.349514508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.356179 env[1733]: time="2024-02-09T09:47:37.356091420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.359608 env[1733]: time="2024-02-09T09:47:37.359556865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.364055 env[1733]: time="2024-02-09T09:47:37.364008761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.366715 env[1733]: time="2024-02-09T09:47:37.366655991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.369035 env[1733]: time="2024-02-09T09:47:37.368975513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.374137 env[1733]: time="2024-02-09T09:47:37.374084326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:47:37.406580 env[1733]: time="2024-02-09T09:47:37.406156538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:47:37.406580 env[1733]: time="2024-02-09T09:47:37.406236707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:47:37.406580 env[1733]: time="2024-02-09T09:47:37.406264706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:47:37.406871 env[1733]: time="2024-02-09T09:47:37.406658698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/279d484d038eda78eed58d2bd03914291b5f972507056fc0e6cc1095e7d057b1 pid=2250 runtime=io.containerd.runc.v2
Feb 9 09:47:37.420417 env[1733]: time="2024-02-09T09:47:37.420266100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:47:37.420417 env[1733]: time="2024-02-09T09:47:37.420347901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:47:37.420417 env[1733]: time="2024-02-09T09:47:37.420374640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:47:37.421261 env[1733]: time="2024-02-09T09:47:37.421152867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2 pid=2268 runtime=io.containerd.runc.v2
Feb 9 09:47:37.447533 systemd[1]: Started cri-containerd-279d484d038eda78eed58d2bd03914291b5f972507056fc0e6cc1095e7d057b1.scope.
Feb 9 09:47:37.451211 kubelet[2157]: E0209 09:47:37.449272 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:37.453196 systemd[1]: Started cri-containerd-721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2.scope.
Feb 9 09:47:37.541516 env[1733]: time="2024-02-09T09:47:37.541341728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6rcx,Uid:b0af1f70-c382-40a3-b10d-f69950757c72,Namespace:kube-system,Attempt:0,} returns sandbox id \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\"" Feb 9 09:47:37.546146 env[1733]: time="2024-02-09T09:47:37.546084259Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:47:37.554782 env[1733]: time="2024-02-09T09:47:37.554698470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k4966,Uid:10d325b3-439b-4697-9ba0-2fd88988d7f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"279d484d038eda78eed58d2bd03914291b5f972507056fc0e6cc1095e7d057b1\"" Feb 9 09:47:37.666751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161000512.mount: Deactivated successfully. Feb 9 09:47:38.449682 kubelet[2157]: E0209 09:47:38.449595 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:39.433641 kubelet[2157]: E0209 09:47:39.432726 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:39.450115 kubelet[2157]: E0209 09:47:39.449970 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:40.451119 kubelet[2157]: E0209 09:47:40.451068 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:41.451890 kubelet[2157]: E0209 09:47:41.451820 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:42.452289 kubelet[2157]: E0209 09:47:42.452187 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 09:47:43.452756 kubelet[2157]: E0209 09:47:43.452691 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:43.903746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76782912.mount: Deactivated successfully. Feb 9 09:47:44.453420 kubelet[2157]: E0209 09:47:44.453309 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:45.454981 kubelet[2157]: E0209 09:47:45.454898 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:46.455183 kubelet[2157]: E0209 09:47:46.455127 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:47.455762 kubelet[2157]: E0209 09:47:47.455711 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:47.782395 env[1733]: time="2024-02-09T09:47:47.781955194Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:47.789453 env[1733]: time="2024-02-09T09:47:47.789367376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:47.794968 env[1733]: time="2024-02-09T09:47:47.794903442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:47.795581 env[1733]: 
time="2024-02-09T09:47:47.795507861Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:47:47.797616 env[1733]: time="2024-02-09T09:47:47.797550609Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:47:47.800251 env[1733]: time="2024-02-09T09:47:47.800191980Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:47:47.821492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421039354.mount: Deactivated successfully. Feb 9 09:47:47.834340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433151865.mount: Deactivated successfully. Feb 9 09:47:47.840551 env[1733]: time="2024-02-09T09:47:47.840443623Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\"" Feb 9 09:47:47.841867 env[1733]: time="2024-02-09T09:47:47.841812564Z" level=info msg="StartContainer for \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\"" Feb 9 09:47:47.877647 systemd[1]: Started cri-containerd-c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71.scope. Feb 9 09:47:47.943571 env[1733]: time="2024-02-09T09:47:47.943489274Z" level=info msg="StartContainer for \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\" returns successfully" Feb 9 09:47:47.963484 systemd[1]: cri-containerd-c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71.scope: Deactivated successfully. 
Feb 9 09:47:48.456766 kubelet[2157]: E0209 09:47:48.456710 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:48.818129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71-rootfs.mount: Deactivated successfully. Feb 9 09:47:49.414343 update_engine[1720]: I0209 09:47:49.414273 1720 update_attempter.cc:509] Updating boot flags... Feb 9 09:47:49.457344 kubelet[2157]: E0209 09:47:49.457087 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:49.680102 env[1733]: time="2024-02-09T09:47:49.679677217Z" level=info msg="shim disconnected" id=c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71 Feb 9 09:47:49.680840 env[1733]: time="2024-02-09T09:47:49.680792502Z" level=warning msg="cleaning up after shim disconnected" id=c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71 namespace=k8s.io Feb 9 09:47:49.680985 env[1733]: time="2024-02-09T09:47:49.680955879Z" level=info msg="cleaning up dead shim" Feb 9 09:47:49.710664 env[1733]: time="2024-02-09T09:47:49.710604785Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2435 runtime=io.containerd.runc.v2\n" Feb 9 09:47:49.818157 env[1733]: time="2024-02-09T09:47:49.818077308Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:47:49.847751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260136875.mount: Deactivated successfully. Feb 9 09:47:49.858496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341853802.mount: Deactivated successfully. 
Feb 9 09:47:49.885184 env[1733]: time="2024-02-09T09:47:49.885093766Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\"" Feb 9 09:47:49.888105 env[1733]: time="2024-02-09T09:47:49.888051423Z" level=info msg="StartContainer for \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\"" Feb 9 09:47:49.950387 systemd[1]: Started cri-containerd-a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf.scope. Feb 9 09:47:50.046826 env[1733]: time="2024-02-09T09:47:50.046752287Z" level=info msg="StartContainer for \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\" returns successfully" Feb 9 09:47:50.077059 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:47:50.077653 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:47:50.077912 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:47:50.082651 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:47:50.083910 systemd[1]: cri-containerd-a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf.scope: Deactivated successfully. Feb 9 09:47:50.125045 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:47:50.328538 env[1733]: time="2024-02-09T09:47:50.328426730Z" level=info msg="shim disconnected" id=a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf Feb 9 09:47:50.328538 env[1733]: time="2024-02-09T09:47:50.328532648Z" level=warning msg="cleaning up after shim disconnected" id=a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf namespace=k8s.io Feb 9 09:47:50.328866 env[1733]: time="2024-02-09T09:47:50.328556169Z" level=info msg="cleaning up dead shim" Feb 9 09:47:50.373055 env[1733]: time="2024-02-09T09:47:50.372981543Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2629 runtime=io.containerd.runc.v2\n" Feb 9 09:47:50.460788 kubelet[2157]: E0209 09:47:50.460625 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:50.813182 env[1733]: time="2024-02-09T09:47:50.813033867Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:47:50.843305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf-rootfs.mount: Deactivated successfully. Feb 9 09:47:50.847743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212733501.mount: Deactivated successfully. Feb 9 09:47:50.859619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount185011112.mount: Deactivated successfully. 
Feb 9 09:47:50.889413 env[1733]: time="2024-02-09T09:47:50.889349224Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\"" Feb 9 09:47:50.890792 env[1733]: time="2024-02-09T09:47:50.890743402Z" level=info msg="StartContainer for \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\"" Feb 9 09:47:50.946134 systemd[1]: Started cri-containerd-acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05.scope. Feb 9 09:47:51.023578 systemd[1]: cri-containerd-acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05.scope: Deactivated successfully. Feb 9 09:47:51.024126 env[1733]: time="2024-02-09T09:47:51.024069592Z" level=info msg="StartContainer for \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\" returns successfully" Feb 9 09:47:51.167175 env[1733]: time="2024-02-09T09:47:51.166393853Z" level=info msg="shim disconnected" id=acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05 Feb 9 09:47:51.167175 env[1733]: time="2024-02-09T09:47:51.166545181Z" level=warning msg="cleaning up after shim disconnected" id=acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05 namespace=k8s.io Feb 9 09:47:51.167175 env[1733]: time="2024-02-09T09:47:51.166568918Z" level=info msg="cleaning up dead shim" Feb 9 09:47:51.181802 env[1733]: time="2024-02-09T09:47:51.181740648Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2766 runtime=io.containerd.runc.v2\n" Feb 9 09:47:51.461744 kubelet[2157]: E0209 09:47:51.461201 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:51.616796 env[1733]: time="2024-02-09T09:47:51.616741470Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:51.621297 env[1733]: time="2024-02-09T09:47:51.621250289Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:51.625221 env[1733]: time="2024-02-09T09:47:51.625175540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:51.629388 env[1733]: time="2024-02-09T09:47:51.629343360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:51.630798 env[1733]: time="2024-02-09T09:47:51.630743114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:47:51.635416 env[1733]: time="2024-02-09T09:47:51.635361403Z" level=info msg="CreateContainer within sandbox \"279d484d038eda78eed58d2bd03914291b5f972507056fc0e6cc1095e7d057b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:47:51.661160 env[1733]: time="2024-02-09T09:47:51.661078150Z" level=info msg="CreateContainer within sandbox \"279d484d038eda78eed58d2bd03914291b5f972507056fc0e6cc1095e7d057b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"218bc9968ace418d14e49e6b54d3d80187eb97f8490804cfcc233b69b840bcdc\"" Feb 9 09:47:51.662009 env[1733]: time="2024-02-09T09:47:51.661939952Z" level=info msg="StartContainer for \"218bc9968ace418d14e49e6b54d3d80187eb97f8490804cfcc233b69b840bcdc\"" Feb 9 09:47:51.692547 systemd[1]: Started 
cri-containerd-218bc9968ace418d14e49e6b54d3d80187eb97f8490804cfcc233b69b840bcdc.scope. Feb 9 09:47:51.768380 env[1733]: time="2024-02-09T09:47:51.767625495Z" level=info msg="StartContainer for \"218bc9968ace418d14e49e6b54d3d80187eb97f8490804cfcc233b69b840bcdc\" returns successfully" Feb 9 09:47:51.820899 env[1733]: time="2024-02-09T09:47:51.820840377Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:47:51.850242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05-rootfs.mount: Deactivated successfully. Feb 9 09:47:51.878136 env[1733]: time="2024-02-09T09:47:51.878070094Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\"" Feb 9 09:47:51.879536 env[1733]: time="2024-02-09T09:47:51.879459528Z" level=info msg="StartContainer for \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\"" Feb 9 09:47:51.918714 systemd[1]: Started cri-containerd-ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3.scope. Feb 9 09:47:51.968930 systemd[1]: cri-containerd-ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3.scope: Deactivated successfully. 
Feb 9 09:47:51.980290 env[1733]: time="2024-02-09T09:47:51.980047910Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0af1f70_c382_40a3_b10d_f69950757c72.slice/cri-containerd-ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3.scope/memory.events\": no such file or directory" Feb 9 09:47:51.988046 env[1733]: time="2024-02-09T09:47:51.987762878Z" level=info msg="StartContainer for \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\" returns successfully" Feb 9 09:47:52.173953 env[1733]: time="2024-02-09T09:47:52.173824808Z" level=info msg="shim disconnected" id=ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3 Feb 9 09:47:52.174241 env[1733]: time="2024-02-09T09:47:52.173973387Z" level=warning msg="cleaning up after shim disconnected" id=ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3 namespace=k8s.io Feb 9 09:47:52.174241 env[1733]: time="2024-02-09T09:47:52.174140940Z" level=info msg="cleaning up dead shim" Feb 9 09:47:52.195874 env[1733]: time="2024-02-09T09:47:52.195810875Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Feb 9 09:47:52.461484 kubelet[2157]: E0209 09:47:52.461327 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:52.828154 env[1733]: time="2024-02-09T09:47:52.828096116Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:47:52.846873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3-rootfs.mount: Deactivated successfully. 
Feb 9 09:47:52.848233 kubelet[2157]: I0209 09:47:52.848195 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k4966" podStartSLOduration=-9.223372016006683e+09 pod.CreationTimestamp="2024-02-09 09:47:32 +0000 UTC" firstStartedPulling="2024-02-09 09:47:37.556716746 +0000 UTC m=+20.391651539" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:51.863925751 +0000 UTC m=+34.698860652" watchObservedRunningTime="2024-02-09 09:47:52.848092616 +0000 UTC m=+35.683027433" Feb 9 09:47:52.856582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201277353.mount: Deactivated successfully. Feb 9 09:47:52.869133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382326316.mount: Deactivated successfully. Feb 9 09:47:52.874651 env[1733]: time="2024-02-09T09:47:52.874564905Z" level=info msg="CreateContainer within sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\"" Feb 9 09:47:52.875836 env[1733]: time="2024-02-09T09:47:52.875754909Z" level=info msg="StartContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\"" Feb 9 09:47:52.906773 systemd[1]: Started cri-containerd-1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65.scope. Feb 9 09:47:52.984314 env[1733]: time="2024-02-09T09:47:52.984227822Z" level=info msg="StartContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" returns successfully" Feb 9 09:47:53.194520 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:47:53.205672 kubelet[2157]: I0209 09:47:53.205076 2157 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:47:53.461996 kubelet[2157]: E0209 09:47:53.461845 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:53.798510 kernel: Initializing XFRM netlink socket Feb 9 09:47:53.804550 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:47:53.856188 kubelet[2157]: I0209 09:47:53.856131 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-g6rcx" podStartSLOduration=-9.223372014998718e+09 pod.CreationTimestamp="2024-02-09 09:47:32 +0000 UTC" firstStartedPulling="2024-02-09 09:47:37.544551221 +0000 UTC m=+20.379486014" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:47:53.856031413 +0000 UTC m=+36.690966290" watchObservedRunningTime="2024-02-09 09:47:53.85605719 +0000 UTC m=+36.690991995" Feb 9 09:47:54.462734 kubelet[2157]: E0209 09:47:54.462688 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:55.210768 (udev-worker)[2380]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:55.212222 (udev-worker)[2393]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 09:47:55.219799 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:47:55.219946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:47:55.215714 systemd-networkd[1533]: cilium_host: Link UP Feb 9 09:47:55.216090 systemd-networkd[1533]: cilium_net: Link UP Feb 9 09:47:55.217189 systemd-networkd[1533]: cilium_net: Gained carrier Feb 9 09:47:55.221250 systemd-networkd[1533]: cilium_host: Gained carrier Feb 9 09:47:55.251728 systemd-networkd[1533]: cilium_net: Gained IPv6LL Feb 9 09:47:55.388920 systemd-networkd[1533]: cilium_vxlan: Link UP Feb 9 09:47:55.388944 systemd-networkd[1533]: cilium_vxlan: Gained carrier Feb 9 09:47:55.463188 kubelet[2157]: E0209 09:47:55.463032 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:55.571691 systemd-networkd[1533]: cilium_host: Gained IPv6LL Feb 9 09:47:55.839016 amazon-ssm-agent[1707]: 2024-02-09 09:47:55 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 09:47:55.855507 kernel: NET: Registered PF_ALG protocol family Feb 9 09:47:56.464007 kubelet[2157]: E0209 09:47:56.463959 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:56.619704 systemd-networkd[1533]: cilium_vxlan: Gained IPv6LL Feb 9 09:47:57.021950 kubelet[2157]: I0209 09:47:57.021870 2157 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:47:57.032503 systemd[1]: Created slice kubepods-besteffort-podd376cb03_0b4b_4c3a_8d39_d56f9f623f7b.slice. 
Feb 9 09:47:57.105229 kubelet[2157]: I0209 09:47:57.105175 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdvr\" (UniqueName: \"kubernetes.io/projected/d376cb03-0b4b-4c3a-8d39-d56f9f623f7b-kube-api-access-gxdvr\") pod \"nginx-deployment-8ffc5cf85-cmrp4\" (UID: \"d376cb03-0b4b-4c3a-8d39-d56f9f623f7b\") " pod="default/nginx-deployment-8ffc5cf85-cmrp4" Feb 9 09:47:57.176512 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:47:57.181803 systemd-networkd[1533]: lxc_health: Link UP Feb 9 09:47:57.182462 systemd-networkd[1533]: lxc_health: Gained carrier Feb 9 09:47:57.338292 env[1733]: time="2024-02-09T09:47:57.338219477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cmrp4,Uid:d376cb03-0b4b-4c3a-8d39-d56f9f623f7b,Namespace:default,Attempt:0,}" Feb 9 09:47:57.465100 kubelet[2157]: E0209 09:47:57.465006 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:57.913726 systemd-networkd[1533]: lxcbd7de71ed9cd: Link UP Feb 9 09:47:57.922522 kernel: eth0: renamed from tmp56084 Feb 9 09:47:57.930235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbd7de71ed9cd: link becomes ready Feb 9 09:47:57.929148 systemd-networkd[1533]: lxcbd7de71ed9cd: Gained carrier Feb 9 09:47:58.465933 kubelet[2157]: E0209 09:47:58.465886 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:58.988307 systemd-networkd[1533]: lxc_health: Gained IPv6LL Feb 9 09:47:59.433181 kubelet[2157]: E0209 09:47:59.433113 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:59.467696 kubelet[2157]: E0209 09:47:59.467650 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
09:47:59.500242 systemd-networkd[1533]: lxcbd7de71ed9cd: Gained IPv6LL Feb 9 09:48:00.468452 kubelet[2157]: E0209 09:48:00.468408 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:01.469221 kubelet[2157]: E0209 09:48:01.469174 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:02.470617 kubelet[2157]: E0209 09:48:02.470571 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:03.471834 kubelet[2157]: E0209 09:48:03.471784 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:04.206887 amazon-ssm-agent[1707]: 2024-02-09 09:48:04 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:48:04.473711 kubelet[2157]: E0209 09:48:04.473288 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:05.475011 kubelet[2157]: E0209 09:48:05.474943 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:05.899585 env[1733]: time="2024-02-09T09:48:05.899425876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:05.900274 env[1733]: time="2024-02-09T09:48:05.899537527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:05.900274 env[1733]: time="2024-02-09T09:48:05.899564804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:05.900274 env[1733]: time="2024-02-09T09:48:05.899918899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56084ceb20490e3d85715de57a755809bbee4f47d47b2355693537a9bd854b3a pid=3481 runtime=io.containerd.runc.v2
Feb 9 09:48:05.927421 systemd[1]: Started cri-containerd-56084ceb20490e3d85715de57a755809bbee4f47d47b2355693537a9bd854b3a.scope.
Feb 9 09:48:06.007007 env[1733]: time="2024-02-09T09:48:06.006952828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-cmrp4,Uid:d376cb03-0b4b-4c3a-8d39-d56f9f623f7b,Namespace:default,Attempt:0,} returns sandbox id \"56084ceb20490e3d85715de57a755809bbee4f47d47b2355693537a9bd854b3a\""
Feb 9 09:48:06.009737 env[1733]: time="2024-02-09T09:48:06.009661076Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:48:06.475136 kubelet[2157]: E0209 09:48:06.475068 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:07.476049 kubelet[2157]: E0209 09:48:07.475955 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:08.476984 kubelet[2157]: E0209 09:48:08.476940 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:09.478364 kubelet[2157]: E0209 09:48:09.478299 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:09.853342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928373381.mount: Deactivated successfully.
Feb 9 09:48:10.479019 kubelet[2157]: E0209 09:48:10.478949 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:11.350810 env[1733]: time="2024-02-09T09:48:11.350748750Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.353372 env[1733]: time="2024-02-09T09:48:11.353323748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.356159 env[1733]: time="2024-02-09T09:48:11.356112663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.360322 env[1733]: time="2024-02-09T09:48:11.360266947Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:48:11.361619 env[1733]: time="2024-02-09T09:48:11.361568042Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:11.365193 env[1733]: time="2024-02-09T09:48:11.365122215Z" level=info msg="CreateContainer within sandbox \"56084ceb20490e3d85715de57a755809bbee4f47d47b2355693537a9bd854b3a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 09:48:11.382658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660856566.mount: Deactivated successfully.
Feb 9 09:48:11.393450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450508551.mount: Deactivated successfully.
Feb 9 09:48:11.397952 env[1733]: time="2024-02-09T09:48:11.397886783Z" level=info msg="CreateContainer within sandbox \"56084ceb20490e3d85715de57a755809bbee4f47d47b2355693537a9bd854b3a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2639fcdb72787881c7863abd048cd2e860d2f24444f01813cd12836e34fce365\""
Feb 9 09:48:11.399493 env[1733]: time="2024-02-09T09:48:11.399414192Z" level=info msg="StartContainer for \"2639fcdb72787881c7863abd048cd2e860d2f24444f01813cd12836e34fce365\""
Feb 9 09:48:11.433711 systemd[1]: Started cri-containerd-2639fcdb72787881c7863abd048cd2e860d2f24444f01813cd12836e34fce365.scope.
Feb 9 09:48:11.479446 kubelet[2157]: E0209 09:48:11.479381 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:11.491999 env[1733]: time="2024-02-09T09:48:11.491937807Z" level=info msg="StartContainer for \"2639fcdb72787881c7863abd048cd2e860d2f24444f01813cd12836e34fce365\" returns successfully"
Feb 9 09:48:11.886801 kubelet[2157]: I0209 09:48:11.886759 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-cmrp4" podStartSLOduration=-9.22337202196809e+09 pod.CreationTimestamp="2024-02-09 09:47:57 +0000 UTC" firstStartedPulling="2024-02-09 09:48:06.009051423 +0000 UTC m=+48.843986216" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:11.886131682 +0000 UTC m=+54.721066499" watchObservedRunningTime="2024-02-09 09:48:11.886685724 +0000 UTC m=+54.721620553"
Feb 9 09:48:12.480762 kubelet[2157]: E0209 09:48:12.480721 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:13.481824 kubelet[2157]: E0209 09:48:13.481753 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:14.482578 kubelet[2157]: E0209 09:48:14.482538 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:15.483325 kubelet[2157]: E0209 09:48:15.483279 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:16.189048 kubelet[2157]: I0209 09:48:16.188984 2157 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:16.200015 systemd[1]: Created slice kubepods-besteffort-pod70c9fce8_2886_416f_a419_a07b04b47a9d.slice.
Feb 9 09:48:16.320557 kubelet[2157]: I0209 09:48:16.320459 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxwv\" (UniqueName: \"kubernetes.io/projected/70c9fce8-2886-416f-a419-a07b04b47a9d-kube-api-access-9wxwv\") pod \"nfs-server-provisioner-0\" (UID: \"70c9fce8-2886-416f-a419-a07b04b47a9d\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:48:16.320755 kubelet[2157]: I0209 09:48:16.320573 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/70c9fce8-2886-416f-a419-a07b04b47a9d-data\") pod \"nfs-server-provisioner-0\" (UID: \"70c9fce8-2886-416f-a419-a07b04b47a9d\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:48:16.484788 kubelet[2157]: E0209 09:48:16.484645 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:16.510806 env[1733]: time="2024-02-09T09:48:16.510738211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:70c9fce8-2886-416f-a419-a07b04b47a9d,Namespace:default,Attempt:0,}"
Feb 9 09:48:16.563945 systemd-networkd[1533]: lxc451e1b3edfac: Link UP
Feb 9 09:48:16.568947 (udev-worker)[3599]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:16.576530 kernel: eth0: renamed from tmpd99ac
Feb 9 09:48:16.588244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:48:16.588362 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc451e1b3edfac: link becomes ready
Feb 9 09:48:16.588636 systemd-networkd[1533]: lxc451e1b3edfac: Gained carrier
Feb 9 09:48:16.992637 env[1733]: time="2024-02-09T09:48:16.992436760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:16.992830 env[1733]: time="2024-02-09T09:48:16.992692101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:16.992830 env[1733]: time="2024-02-09T09:48:16.992784611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:16.993433 env[1733]: time="2024-02-09T09:48:16.993304102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74 pid=3654 runtime=io.containerd.runc.v2
Feb 9 09:48:17.027293 systemd[1]: Started cri-containerd-d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74.scope.
Feb 9 09:48:17.098174 env[1733]: time="2024-02-09T09:48:17.098120221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:70c9fce8-2886-416f-a419-a07b04b47a9d,Namespace:default,Attempt:0,} returns sandbox id \"d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74\""
Feb 9 09:48:17.100816 env[1733]: time="2024-02-09T09:48:17.100766515Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 09:48:17.442673 systemd[1]: run-containerd-runc-k8s.io-d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74-runc.axZyGJ.mount: Deactivated successfully.
Feb 9 09:48:17.485684 kubelet[2157]: E0209 09:48:17.485629 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:18.379812 systemd-networkd[1533]: lxc451e1b3edfac: Gained IPv6LL
Feb 9 09:48:18.486845 kubelet[2157]: E0209 09:48:18.486779 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:19.433776 kubelet[2157]: E0209 09:48:19.433714 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:19.487921 kubelet[2157]: E0209 09:48:19.487862 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:20.323046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812601365.mount: Deactivated successfully.
Feb 9 09:48:20.488635 kubelet[2157]: E0209 09:48:20.488547 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:21.489305 kubelet[2157]: E0209 09:48:21.489240 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:22.489697 kubelet[2157]: E0209 09:48:22.489629 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:23.490717 kubelet[2157]: E0209 09:48:23.490571 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:24.160146 env[1733]: time="2024-02-09T09:48:24.160041565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.163661 env[1733]: time="2024-02-09T09:48:24.163592224Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.167448 env[1733]: time="2024-02-09T09:48:24.167387736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.172588 env[1733]: time="2024-02-09T09:48:24.172534483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:24.173605 env[1733]: time="2024-02-09T09:48:24.173541421Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 09:48:24.178109 env[1733]: time="2024-02-09T09:48:24.177952748Z" level=info msg="CreateContainer within sandbox \"d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 09:48:24.195870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3544096903.mount: Deactivated successfully.
Feb 9 09:48:24.212017 env[1733]: time="2024-02-09T09:48:24.211952018Z" level=info msg="CreateContainer within sandbox \"d99ac8c54b3519faefed28f8d3d496e701c761d903066bb1aa406d09c41c1c74\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fd1890b29c6b501cf172546e9bfca8acb7ed9cc5b88b7d58a66d99b036ccb943\""
Feb 9 09:48:24.213167 env[1733]: time="2024-02-09T09:48:24.213105671Z" level=info msg="StartContainer for \"fd1890b29c6b501cf172546e9bfca8acb7ed9cc5b88b7d58a66d99b036ccb943\""
Feb 9 09:48:24.254808 systemd[1]: Started cri-containerd-fd1890b29c6b501cf172546e9bfca8acb7ed9cc5b88b7d58a66d99b036ccb943.scope.
Feb 9 09:48:24.311000 env[1733]: time="2024-02-09T09:48:24.310935998Z" level=info msg="StartContainer for \"fd1890b29c6b501cf172546e9bfca8acb7ed9cc5b88b7d58a66d99b036ccb943\" returns successfully"
Feb 9 09:48:24.491533 kubelet[2157]: E0209 09:48:24.491354 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:24.930697 kubelet[2157]: I0209 09:48:24.930631 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372027924217e+09 pod.CreationTimestamp="2024-02-09 09:48:16 +0000 UTC" firstStartedPulling="2024-02-09 09:48:17.100149043 +0000 UTC m=+59.935083848" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:24.929256211 +0000 UTC m=+67.764191052" watchObservedRunningTime="2024-02-09 09:48:24.930559482 +0000 UTC m=+67.765494299"
Feb 9 09:48:25.191709 systemd[1]: run-containerd-runc-k8s.io-fd1890b29c6b501cf172546e9bfca8acb7ed9cc5b88b7d58a66d99b036ccb943-runc.JiUA4U.mount: Deactivated successfully.
Feb 9 09:48:25.492212 kubelet[2157]: E0209 09:48:25.492067 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:26.492568 kubelet[2157]: E0209 09:48:26.492493 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:27.493190 kubelet[2157]: E0209 09:48:27.493137 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:28.494415 kubelet[2157]: E0209 09:48:28.494345 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:29.495450 kubelet[2157]: E0209 09:48:29.495378 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:30.496073 kubelet[2157]: E0209 09:48:30.495967 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:31.496885 kubelet[2157]: E0209 09:48:31.496841 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:32.498604 kubelet[2157]: E0209 09:48:32.498536 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:33.498840 kubelet[2157]: E0209 09:48:33.498799 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:34.030234 kubelet[2157]: I0209 09:48:34.030186 2157 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:48:34.039176 systemd[1]: Created slice kubepods-besteffort-pod7eb1f4ad_ec3e_46c0_9341_15c518b36df9.slice.
Feb 9 09:48:34.125315 kubelet[2157]: I0209 09:48:34.125258 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfmw\" (UniqueName: \"kubernetes.io/projected/7eb1f4ad-ec3e-46c0-9341-15c518b36df9-kube-api-access-6pfmw\") pod \"test-pod-1\" (UID: \"7eb1f4ad-ec3e-46c0-9341-15c518b36df9\") " pod="default/test-pod-1"
Feb 9 09:48:34.125531 kubelet[2157]: I0209 09:48:34.125360 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c974a02f-9cf3-4e9f-a786-cced32576ee9\" (UniqueName: \"kubernetes.io/nfs/7eb1f4ad-ec3e-46c0-9341-15c518b36df9-pvc-c974a02f-9cf3-4e9f-a786-cced32576ee9\") pod \"test-pod-1\" (UID: \"7eb1f4ad-ec3e-46c0-9341-15c518b36df9\") " pod="default/test-pod-1"
Feb 9 09:48:34.271522 kernel: FS-Cache: Loaded
Feb 9 09:48:34.314577 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 09:48:34.316537 kernel: RPC: Registered udp transport module.
Feb 9 09:48:34.316599 kernel: RPC: Registered tcp transport module.
Feb 9 09:48:34.316651 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 09:48:34.372522 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 09:48:34.499595 kubelet[2157]: E0209 09:48:34.499532 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:34.632173 kernel: NFS: Registering the id_resolver key type
Feb 9 09:48:34.632433 kernel: Key type id_resolver registered
Feb 9 09:48:34.634059 kernel: Key type id_legacy registered
Feb 9 09:48:34.673725 nfsidmap[3801]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 09:48:34.679181 nfsidmap[3802]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 09:48:34.945970 env[1733]: time="2024-02-09T09:48:34.945815637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7eb1f4ad-ec3e-46c0-9341-15c518b36df9,Namespace:default,Attempt:0,}"
Feb 9 09:48:34.997514 (udev-worker)[3792]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:34.998766 (udev-worker)[3795]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:34.999237 systemd-networkd[1533]: lxcafe64a10605a: Link UP
Feb 9 09:48:35.010510 kernel: eth0: renamed from tmp5a9ce
Feb 9 09:48:35.022098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:48:35.022241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcafe64a10605a: link becomes ready
Feb 9 09:48:35.023666 systemd-networkd[1533]: lxcafe64a10605a: Gained carrier
Feb 9 09:48:35.428660 env[1733]: time="2024-02-09T09:48:35.428525573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:35.428962 env[1733]: time="2024-02-09T09:48:35.428611254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:35.428962 env[1733]: time="2024-02-09T09:48:35.428639298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:35.429194 env[1733]: time="2024-02-09T09:48:35.428997348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671 pid=3828 runtime=io.containerd.runc.v2
Feb 9 09:48:35.471675 systemd[1]: run-containerd-runc-k8s.io-5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671-runc.9pmoio.mount: Deactivated successfully.
Feb 9 09:48:35.475087 systemd[1]: Started cri-containerd-5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671.scope.
Feb 9 09:48:35.504065 kubelet[2157]: E0209 09:48:35.504012 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:35.547652 env[1733]: time="2024-02-09T09:48:35.547582348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7eb1f4ad-ec3e-46c0-9341-15c518b36df9,Namespace:default,Attempt:0,} returns sandbox id \"5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671\""
Feb 9 09:48:35.550974 env[1733]: time="2024-02-09T09:48:35.550911224Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:48:36.006564 env[1733]: time="2024-02-09T09:48:36.006508820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:36.009203 env[1733]: time="2024-02-09T09:48:36.009154093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:36.011894 env[1733]: time="2024-02-09T09:48:36.011849466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:36.014771 env[1733]: time="2024-02-09T09:48:36.014725058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:36.016143 env[1733]: time="2024-02-09T09:48:36.016081511Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:48:36.019729 env[1733]: time="2024-02-09T09:48:36.019675038Z" level=info msg="CreateContainer within sandbox \"5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 09:48:36.044601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717548805.mount: Deactivated successfully.
Feb 9 09:48:36.052291 env[1733]: time="2024-02-09T09:48:36.052209443Z" level=info msg="CreateContainer within sandbox \"5a9ce8b3f194562b59f37e8072d69545614dd829d389d34a52a6908adba56671\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2dfbe9daaa52a53b0f27682a60fdb5eebb2c3cbedf82de20abcb68a44478c10c\""
Feb 9 09:48:36.053425 env[1733]: time="2024-02-09T09:48:36.053361292Z" level=info msg="StartContainer for \"2dfbe9daaa52a53b0f27682a60fdb5eebb2c3cbedf82de20abcb68a44478c10c\""
Feb 9 09:48:36.083517 systemd[1]: Started cri-containerd-2dfbe9daaa52a53b0f27682a60fdb5eebb2c3cbedf82de20abcb68a44478c10c.scope.
Feb 9 09:48:36.146967 env[1733]: time="2024-02-09T09:48:36.146878157Z" level=info msg="StartContainer for \"2dfbe9daaa52a53b0f27682a60fdb5eebb2c3cbedf82de20abcb68a44478c10c\" returns successfully"
Feb 9 09:48:36.505369 kubelet[2157]: E0209 09:48:36.505311 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:36.747932 systemd-networkd[1533]: lxcafe64a10605a: Gained IPv6LL
Feb 9 09:48:36.959555 kubelet[2157]: I0209 09:48:36.959502 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372015895348e+09 pod.CreationTimestamp="2024-02-09 09:48:16 +0000 UTC" firstStartedPulling="2024-02-09 09:48:35.550040142 +0000 UTC m=+78.384974947" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:36.959140924 +0000 UTC m=+79.794075765" watchObservedRunningTime="2024-02-09 09:48:36.95942798 +0000 UTC m=+79.794362785"
Feb 9 09:48:37.506286 kubelet[2157]: E0209 09:48:37.506229 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:38.506603 kubelet[2157]: E0209 09:48:38.506542 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:39.433553 kubelet[2157]: E0209 09:48:39.433502 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:39.507036 kubelet[2157]: E0209 09:48:39.506991 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:40.508064 kubelet[2157]: E0209 09:48:40.507994 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:41.508224 kubelet[2157]: E0209 09:48:41.508158 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:41.741581 env[1733]: time="2024-02-09T09:48:41.741502023Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:48:41.751159 env[1733]: time="2024-02-09T09:48:41.751097741Z" level=info msg="StopContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" with timeout 1 (s)"
Feb 9 09:48:41.751937 env[1733]: time="2024-02-09T09:48:41.751884420Z" level=info msg="Stop container \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" with signal terminated"
Feb 9 09:48:41.764339 systemd-networkd[1533]: lxc_health: Link DOWN
Feb 9 09:48:41.764359 systemd-networkd[1533]: lxc_health: Lost carrier
Feb 9 09:48:41.796707 systemd[1]: cri-containerd-1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65.scope: Deactivated successfully.
Feb 9 09:48:41.797283 systemd[1]: cri-containerd-1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65.scope: Consumed 14.291s CPU time.
Feb 9 09:48:41.834109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65-rootfs.mount: Deactivated successfully.
Feb 9 09:48:42.130922 env[1733]: time="2024-02-09T09:48:42.130860147Z" level=info msg="shim disconnected" id=1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65
Feb 9 09:48:42.131274 env[1733]: time="2024-02-09T09:48:42.131242194Z" level=warning msg="cleaning up after shim disconnected" id=1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65 namespace=k8s.io
Feb 9 09:48:42.131429 env[1733]: time="2024-02-09T09:48:42.131401232Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:42.146746 env[1733]: time="2024-02-09T09:48:42.146690271Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:42.150457 env[1733]: time="2024-02-09T09:48:42.150400068Z" level=info msg="StopContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" returns successfully"
Feb 9 09:48:42.151895 env[1733]: time="2024-02-09T09:48:42.151842890Z" level=info msg="StopPodSandbox for \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\""
Feb 9 09:48:42.152060 env[1733]: time="2024-02-09T09:48:42.151944002Z" level=info msg="Container to stop \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.152060 env[1733]: time="2024-02-09T09:48:42.151977063Z" level=info msg="Container to stop \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.152060 env[1733]: time="2024-02-09T09:48:42.152005011Z" level=info msg="Container to stop \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.152060 env[1733]: time="2024-02-09T09:48:42.152034087Z" level=info msg="Container to stop \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.157169 env[1733]: time="2024-02-09T09:48:42.152060092Z" level=info msg="Container to stop \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:42.156081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2-shm.mount: Deactivated successfully.
Feb 9 09:48:42.167876 systemd[1]: cri-containerd-721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2.scope: Deactivated successfully.
Feb 9 09:48:42.202529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2-rootfs.mount: Deactivated successfully.
Feb 9 09:48:42.212924 env[1733]: time="2024-02-09T09:48:42.212858368Z" level=info msg="shim disconnected" id=721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2
Feb 9 09:48:42.213375 env[1733]: time="2024-02-09T09:48:42.213317096Z" level=warning msg="cleaning up after shim disconnected" id=721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2 namespace=k8s.io
Feb 9 09:48:42.213615 env[1733]: time="2024-02-09T09:48:42.213585191Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:42.227032 env[1733]: time="2024-02-09T09:48:42.226975416Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3991 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:42.227816 env[1733]: time="2024-02-09T09:48:42.227772739Z" level=info msg="TearDown network for sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" successfully"
Feb 9 09:48:42.227975 env[1733]: time="2024-02-09T09:48:42.227940993Z" level=info msg="StopPodSandbox for \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" returns successfully"
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272557 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-bpf-maps\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272635 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0af1f70-c382-40a3-b10d-f69950757c72-clustermesh-secrets\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272641 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272680 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-etc-cni-netd\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272730 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-cgroup\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274280 kubelet[2157]: I0209 09:48:42.272775 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-xtables-lock\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.272835 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-hubble-tls\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.272879 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-run\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.272918 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-hostproc\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.272955 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cni-path\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.273005 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpv4t\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-kube-api-access-hpv4t\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.274806 kubelet[2157]: I0209 09:48:42.273045 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-net\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.275150 kubelet[2157]: I0209 09:48:42.273083 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-lib-modules\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.275150 kubelet[2157]: I0209 09:48:42.273128 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-kernel\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.275150 kubelet[2157]: I0209 09:48:42.273175 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-config-path\") pod \"b0af1f70-c382-40a3-b10d-f69950757c72\" (UID: \"b0af1f70-c382-40a3-b10d-f69950757c72\") "
Feb 9 09:48:42.275150 kubelet[2157]: I0209 09:48:42.273226 2157 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-bpf-maps\") on node \"172.31.16.94\" DevicePath \"\""
Feb 9 09:48:42.275150 kubelet[2157]: I0209 09:48:42.273447 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:42.275150 kubelet[2157]: W0209 09:48:42.273500 2157 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b0af1f70-c382-40a3-b10d-f69950757c72/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 09:48:42.275513 kubelet[2157]: I0209 09:48:42.273552 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:42.275513 kubelet[2157]: I0209 09:48:42.273642 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:42.275513 kubelet[2157]: I0209 09:48:42.273712 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:42.278419 kubelet[2157]: I0209 09:48:42.278348 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:48:42.278629 kubelet[2157]: I0209 09:48:42.278489 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:42.278806 kubelet[2157]: I0209 09:48:42.278762 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:42.279204 kubelet[2157]: I0209 09:48:42.279156 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:42.279417 kubelet[2157]: I0209 09:48:42.279374 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:42.279654 kubelet[2157]: I0209 09:48:42.279572 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:42.283570 kubelet[2157]: I0209 09:48:42.283518 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-kube-api-access-hpv4t" (OuterVolumeSpecName: "kube-api-access-hpv4t") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "kube-api-access-hpv4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:42.285201 kubelet[2157]: I0209 09:48:42.285149 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0af1f70-c382-40a3-b10d-f69950757c72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:42.287548 kubelet[2157]: I0209 09:48:42.287461 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0af1f70-c382-40a3-b10d-f69950757c72" (UID: "b0af1f70-c382-40a3-b10d-f69950757c72"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:42.373947 kubelet[2157]: I0209 09:48:42.373908 2157 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0af1f70-c382-40a3-b10d-f69950757c72-clustermesh-secrets\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374175 kubelet[2157]: I0209 09:48:42.374154 2157 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-etc-cni-netd\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374299 kubelet[2157]: I0209 09:48:42.374279 2157 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-hubble-tls\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374433 kubelet[2157]: I0209 09:48:42.374414 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-run\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374604 kubelet[2157]: I0209 09:48:42.374584 2157 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-hostproc\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374727 kubelet[2157]: I0209 09:48:42.374708 2157 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cni-path\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374842 kubelet[2157]: I0209 09:48:42.374823 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-cgroup\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.374963 kubelet[2157]: I0209 09:48:42.374944 2157 
reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-xtables-lock\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.375085 kubelet[2157]: I0209 09:48:42.375065 2157 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-hpv4t\" (UniqueName: \"kubernetes.io/projected/b0af1f70-c382-40a3-b10d-f69950757c72-kube-api-access-hpv4t\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.375211 kubelet[2157]: I0209 09:48:42.375192 2157 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-kernel\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.375327 kubelet[2157]: I0209 09:48:42.375307 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0af1f70-c382-40a3-b10d-f69950757c72-cilium-config-path\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.375441 kubelet[2157]: I0209 09:48:42.375422 2157 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-host-proc-sys-net\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.375591 kubelet[2157]: I0209 09:48:42.375571 2157 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0af1f70-c382-40a3-b10d-f69950757c72-lib-modules\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:42.509754 kubelet[2157]: E0209 09:48:42.509596 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:42.707162 systemd[1]: var-lib-kubelet-pods-b0af1f70\x2dc382\x2d40a3\x2db10d\x2df69950757c72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpv4t.mount: Deactivated successfully. 
Feb 9 09:48:42.707347 systemd[1]: var-lib-kubelet-pods-b0af1f70\x2dc382\x2d40a3\x2db10d\x2df69950757c72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:42.707503 systemd[1]: var-lib-kubelet-pods-b0af1f70\x2dc382\x2d40a3\x2db10d\x2df69950757c72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:48:42.965031 kubelet[2157]: I0209 09:48:42.964990 2157 scope.go:115] "RemoveContainer" containerID="1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65" Feb 9 09:48:42.968167 env[1733]: time="2024-02-09T09:48:42.968107742Z" level=info msg="RemoveContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\"" Feb 9 09:48:42.974121 env[1733]: time="2024-02-09T09:48:42.974051864Z" level=info msg="RemoveContainer for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" returns successfully" Feb 9 09:48:42.974580 kubelet[2157]: I0209 09:48:42.974551 2157 scope.go:115] "RemoveContainer" containerID="ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3" Feb 9 09:48:42.977419 env[1733]: time="2024-02-09T09:48:42.976782393Z" level=info msg="RemoveContainer for \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\"" Feb 9 09:48:42.982615 systemd[1]: Removed slice kubepods-burstable-podb0af1f70_c382_40a3_b10d_f69950757c72.slice. Feb 9 09:48:42.982831 systemd[1]: kubepods-burstable-podb0af1f70_c382_40a3_b10d_f69950757c72.slice: Consumed 14.494s CPU time. 
Feb 9 09:48:42.984343 env[1733]: time="2024-02-09T09:48:42.984288965Z" level=info msg="RemoveContainer for \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\" returns successfully" Feb 9 09:48:42.984811 kubelet[2157]: I0209 09:48:42.984776 2157 scope.go:115] "RemoveContainer" containerID="acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05" Feb 9 09:48:42.986454 env[1733]: time="2024-02-09T09:48:42.986408749Z" level=info msg="RemoveContainer for \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\"" Feb 9 09:48:42.991119 env[1733]: time="2024-02-09T09:48:42.991066267Z" level=info msg="RemoveContainer for \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\" returns successfully" Feb 9 09:48:42.991766 kubelet[2157]: I0209 09:48:42.991714 2157 scope.go:115] "RemoveContainer" containerID="a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf" Feb 9 09:48:42.993451 env[1733]: time="2024-02-09T09:48:42.993403972Z" level=info msg="RemoveContainer for \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\"" Feb 9 09:48:42.997716 env[1733]: time="2024-02-09T09:48:42.997662379Z" level=info msg="RemoveContainer for \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\" returns successfully" Feb 9 09:48:42.999102 kubelet[2157]: I0209 09:48:42.999059 2157 scope.go:115] "RemoveContainer" containerID="c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71" Feb 9 09:48:43.001608 env[1733]: time="2024-02-09T09:48:43.001556258Z" level=info msg="RemoveContainer for \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\"" Feb 9 09:48:43.006381 env[1733]: time="2024-02-09T09:48:43.006323398Z" level=info msg="RemoveContainer for \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\" returns successfully" Feb 9 09:48:43.007082 kubelet[2157]: I0209 09:48:43.006988 2157 scope.go:115] "RemoveContainer" 
containerID="1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65" Feb 9 09:48:43.007565 env[1733]: time="2024-02-09T09:48:43.007419320Z" level=error msg="ContainerStatus for \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\": not found" Feb 9 09:48:43.007942 kubelet[2157]: E0209 09:48:43.007910 2157 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\": not found" containerID="1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65" Feb 9 09:48:43.008043 kubelet[2157]: I0209 09:48:43.007978 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65} err="failed to get container status \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fe18a2c1b699cb75cc720fda0ea6525e20e586c90f02e9a074c367703db4b65\": not found" Feb 9 09:48:43.008043 kubelet[2157]: I0209 09:48:43.008002 2157 scope.go:115] "RemoveContainer" containerID="ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3" Feb 9 09:48:43.008450 env[1733]: time="2024-02-09T09:48:43.008370557Z" level=error msg="ContainerStatus for \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\": not found" Feb 9 09:48:43.008806 kubelet[2157]: E0209 09:48:43.008777 2157 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\": not found" containerID="ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3" Feb 9 09:48:43.008924 kubelet[2157]: I0209 09:48:43.008847 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3} err="failed to get container status \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee4ee2eba1f581d92334d4242240ff8948ba7ad5e028b375a8a5e2420891f2e3\": not found" Feb 9 09:48:43.008924 kubelet[2157]: I0209 09:48:43.008875 2157 scope.go:115] "RemoveContainer" containerID="acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05" Feb 9 09:48:43.009322 env[1733]: time="2024-02-09T09:48:43.009245593Z" level=error msg="ContainerStatus for \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\": not found" Feb 9 09:48:43.009701 kubelet[2157]: E0209 09:48:43.009671 2157 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\": not found" containerID="acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05" Feb 9 09:48:43.009811 kubelet[2157]: I0209 09:48:43.009728 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05} err="failed to get container status \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"acf4f6162d7b535b5a0b228fafd696bc76071cc5dd4294109863c03f69f8fc05\": not found" Feb 9 09:48:43.009811 kubelet[2157]: I0209 09:48:43.009751 2157 scope.go:115] "RemoveContainer" containerID="a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf" Feb 9 09:48:43.010234 env[1733]: time="2024-02-09T09:48:43.010157518Z" level=error msg="ContainerStatus for \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\": not found" Feb 9 09:48:43.012455 kubelet[2157]: E0209 09:48:43.012384 2157 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\": not found" containerID="a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf" Feb 9 09:48:43.012455 kubelet[2157]: I0209 09:48:43.012461 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf} err="failed to get container status \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\": rpc error: code = NotFound desc = an error occurred when try to find container \"a743fea9a9aefe3c7378d9f835a27f7616f3938782c445b3db178e9bc69b6faf\": not found" Feb 9 09:48:43.012749 kubelet[2157]: I0209 09:48:43.012513 2157 scope.go:115] "RemoveContainer" containerID="c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71" Feb 9 09:48:43.012995 env[1733]: time="2024-02-09T09:48:43.012879707Z" level=error msg="ContainerStatus for \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\": not found" Feb 9 09:48:43.013290 kubelet[2157]: E0209 09:48:43.013225 2157 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\": not found" containerID="c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71" Feb 9 09:48:43.013389 kubelet[2157]: I0209 09:48:43.013307 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71} err="failed to get container status \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\": rpc error: code = NotFound desc = an error occurred when try to find container \"c29e911afdd716553d9612be909fdaa9207ad24e8ef5a2c6251ec0c44acb0a71\": not found" Feb 9 09:48:43.510379 kubelet[2157]: E0209 09:48:43.510306 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:43.740859 kubelet[2157]: I0209 09:48:43.740804 2157 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b0af1f70-c382-40a3-b10d-f69950757c72 path="/var/lib/kubelet/pods/b0af1f70-c382-40a3-b10d-f69950757c72/volumes" Feb 9 09:48:44.511450 kubelet[2157]: E0209 09:48:44.511405 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:44.625153 kubelet[2157]: E0209 09:48:44.625120 2157 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:48:45.513243 kubelet[2157]: E0209 09:48:45.513177 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
09:48:46.513537 kubelet[2157]: E0209 09:48:46.513453 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:47.124869 kubelet[2157]: I0209 09:48:47.124803 2157 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:47.125146 kubelet[2157]: E0209 09:48:47.125125 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="apply-sysctl-overwrites" Feb 9 09:48:47.125281 kubelet[2157]: E0209 09:48:47.125260 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="clean-cilium-state" Feb 9 09:48:47.125434 kubelet[2157]: E0209 09:48:47.125411 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="cilium-agent" Feb 9 09:48:47.125617 kubelet[2157]: E0209 09:48:47.125593 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="mount-cgroup" Feb 9 09:48:47.125763 kubelet[2157]: E0209 09:48:47.125741 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="mount-bpf-fs" Feb 9 09:48:47.125953 kubelet[2157]: I0209 09:48:47.125916 2157 memory_manager.go:346] "RemoveStaleState removing state" podUID="b0af1f70-c382-40a3-b10d-f69950757c72" containerName="cilium-agent" Feb 9 09:48:47.135929 systemd[1]: Created slice kubepods-besteffort-pod885d7963_ce77_45bb_8cb6_a297e9d0cdf4.slice. Feb 9 09:48:47.190983 kubelet[2157]: I0209 09:48:47.190936 2157 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:47.201085 systemd[1]: Created slice kubepods-burstable-pode35e1835_5376_43c6_a7ef_3445f5d424b1.slice. 
Feb 9 09:48:47.203788 kubelet[2157]: I0209 09:48:47.203735 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/885d7963-ce77-45bb-8cb6-a297e9d0cdf4-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-wdchv\" (UID: \"885d7963-ce77-45bb-8cb6-a297e9d0cdf4\") " pod="kube-system/cilium-operator-f59cbd8c6-wdchv" Feb 9 09:48:47.204061 kubelet[2157]: I0209 09:48:47.204028 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9k5\" (UniqueName: \"kubernetes.io/projected/885d7963-ce77-45bb-8cb6-a297e9d0cdf4-kube-api-access-wh9k5\") pod \"cilium-operator-f59cbd8c6-wdchv\" (UID: \"885d7963-ce77-45bb-8cb6-a297e9d0cdf4\") " pod="kube-system/cilium-operator-f59cbd8c6-wdchv" Feb 9 09:48:47.305276 kubelet[2157]: I0209 09:48:47.305238 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-etc-cni-netd\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.305652 kubelet[2157]: I0209 09:48:47.305624 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-ipsec-secrets\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.305862 kubelet[2157]: I0209 09:48:47.305823 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-net\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 
09:48:47.306077 kubelet[2157]: I0209 09:48:47.306038 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-hubble-tls\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.306289 kubelet[2157]: I0209 09:48:47.306251 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-bpf-maps\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.306490 kubelet[2157]: I0209 09:48:47.306447 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-run\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.306659 kubelet[2157]: I0209 09:48:47.306626 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-hostproc\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.306839 kubelet[2157]: I0209 09:48:47.306800 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-xtables-lock\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.307035 kubelet[2157]: I0209 09:48:47.307004 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-config-path\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.307258 kubelet[2157]: I0209 09:48:47.307238 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4j4\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-kube-api-access-4v4j4\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.307441 kubelet[2157]: I0209 09:48:47.307407 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-lib-modules\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.307799 kubelet[2157]: I0209 09:48:47.307776 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-kernel\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.308029 kubelet[2157]: I0209 09:48:47.308009 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-cgroup\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.308194 kubelet[2157]: I0209 09:48:47.308173 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cni-path\") 
pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.308386 kubelet[2157]: I0209 09:48:47.308364 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-clustermesh-secrets\") pod \"cilium-xhpv5\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " pod="kube-system/cilium-xhpv5" Feb 9 09:48:47.448535 env[1733]: time="2024-02-09T09:48:47.440776672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-wdchv,Uid:885d7963-ce77-45bb-8cb6-a297e9d0cdf4,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:47.486892 env[1733]: time="2024-02-09T09:48:47.486758869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:47.486892 env[1733]: time="2024-02-09T09:48:47.486834350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:47.487801 env[1733]: time="2024-02-09T09:48:47.486861386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:47.487801 env[1733]: time="2024-02-09T09:48:47.487554261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b40939cd4d4dd48313dee4ec103c636aae527f093796c1a8b0540afbf92209f3 pid=4021 runtime=io.containerd.runc.v2 Feb 9 09:48:47.511588 systemd[1]: Started cri-containerd-b40939cd4d4dd48313dee4ec103c636aae527f093796c1a8b0540afbf92209f3.scope. 
Feb 9 09:48:47.517744 kubelet[2157]: E0209 09:48:47.514244 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:47.518295 env[1733]: time="2024-02-09T09:48:47.515785576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhpv5,Uid:e35e1835-5376-43c6-a7ef-3445f5d424b1,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:47.542514 env[1733]: time="2024-02-09T09:48:47.542351059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:47.542658 env[1733]: time="2024-02-09T09:48:47.542540073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:47.542658 env[1733]: time="2024-02-09T09:48:47.542605318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:47.542945 env[1733]: time="2024-02-09T09:48:47.542872884Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86 pid=4055 runtime=io.containerd.runc.v2 Feb 9 09:48:47.576825 systemd[1]: Started cri-containerd-ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86.scope. 
Feb 9 09:48:47.624122 env[1733]: time="2024-02-09T09:48:47.624053580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-wdchv,Uid:885d7963-ce77-45bb-8cb6-a297e9d0cdf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b40939cd4d4dd48313dee4ec103c636aae527f093796c1a8b0540afbf92209f3\"" Feb 9 09:48:47.627179 env[1733]: time="2024-02-09T09:48:47.627068885Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:48:47.644175 env[1733]: time="2024-02-09T09:48:47.644113777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xhpv5,Uid:e35e1835-5376-43c6-a7ef-3445f5d424b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:48:47.649446 env[1733]: time="2024-02-09T09:48:47.649390096Z" level=info msg="CreateContainer within sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:48:47.671368 env[1733]: time="2024-02-09T09:48:47.671279590Z" level=info msg="CreateContainer within sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\"" Feb 9 09:48:47.672331 env[1733]: time="2024-02-09T09:48:47.672289904Z" level=info msg="StartContainer for \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\"" Feb 9 09:48:47.701539 systemd[1]: Started cri-containerd-3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc.scope. Feb 9 09:48:47.728499 systemd[1]: cri-containerd-3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc.scope: Deactivated successfully. 
Feb 9 09:48:47.754203 env[1733]: time="2024-02-09T09:48:47.754125386Z" level=info msg="shim disconnected" id=3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc Feb 9 09:48:47.754203 env[1733]: time="2024-02-09T09:48:47.754197902Z" level=warning msg="cleaning up after shim disconnected" id=3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc namespace=k8s.io Feb 9 09:48:47.754653 env[1733]: time="2024-02-09T09:48:47.754220547Z" level=info msg="cleaning up dead shim" Feb 9 09:48:47.770882 env[1733]: time="2024-02-09T09:48:47.770805474Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4117 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:48:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:48:47.771437 env[1733]: time="2024-02-09T09:48:47.771274030Z" level=error msg="copy shim log" error="read /proc/self/fd/61: file already closed" Feb 9 09:48:47.774824 env[1733]: time="2024-02-09T09:48:47.774738428Z" level=error msg="Failed to pipe stderr of container \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\"" error="reading from a closed fifo" Feb 9 09:48:47.775099 env[1733]: time="2024-02-09T09:48:47.775030378Z" level=error msg="Failed to pipe stdout of container \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\"" error="reading from a closed fifo" Feb 9 09:48:47.777086 env[1733]: time="2024-02-09T09:48:47.776997365Z" level=error msg="StartContainer for \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:48:47.778319 kubelet[2157]: E0209 09:48:47.777593 2157 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc" Feb 9 09:48:47.778319 kubelet[2157]: E0209 09:48:47.778136 2157 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:48:47.778319 kubelet[2157]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:48:47.778319 kubelet[2157]: rm /hostbin/cilium-mount Feb 9 09:48:47.778741 kubelet[2157]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4v4j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-xhpv5_kube-system(e35e1835-5376-43c6-a7ef-3445f5d424b1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:48:47.778901 kubelet[2157]: E0209 09:48:47.778220 2157 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xhpv5" podUID=e35e1835-5376-43c6-a7ef-3445f5d424b1 Feb 9 09:48:47.981416 env[1733]: time="2024-02-09T09:48:47.979842310Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:48:47.981919 env[1733]: time="2024-02-09T09:48:47.981869585Z" level=info msg="Container to stop \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:48:47.994796 systemd[1]: cri-containerd-ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86.scope: Deactivated successfully. 
Feb 9 09:48:48.042608 env[1733]: time="2024-02-09T09:48:48.042531828Z" level=info msg="shim disconnected" id=ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86 Feb 9 09:48:48.042608 env[1733]: time="2024-02-09T09:48:48.042603853Z" level=warning msg="cleaning up after shim disconnected" id=ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86 namespace=k8s.io Feb 9 09:48:48.042938 env[1733]: time="2024-02-09T09:48:48.042626773Z" level=info msg="cleaning up dead shim" Feb 9 09:48:48.057150 env[1733]: time="2024-02-09T09:48:48.057079845Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4148 runtime=io.containerd.runc.v2\n" Feb 9 09:48:48.057698 env[1733]: time="2024-02-09T09:48:48.057650499Z" level=info msg="TearDown network for sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" successfully" Feb 9 09:48:48.057815 env[1733]: time="2024-02-09T09:48:48.057700179Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" returns successfully" Feb 9 09:48:48.117826 kubelet[2157]: I0209 09:48:48.117781 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-net\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118054 kubelet[2157]: I0209 09:48:48.117848 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-xtables-lock\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118054 kubelet[2157]: I0209 09:48:48.117897 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cni-path\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118054 kubelet[2157]: I0209 09:48:48.117947 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v4j4\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-kube-api-access-4v4j4\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118054 kubelet[2157]: I0209 09:48:48.117996 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-clustermesh-secrets\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118054 kubelet[2157]: I0209 09:48:48.118036 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-etc-cni-netd\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118080 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-hubble-tls\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118120 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-bpf-maps\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118159 2157 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-run\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118202 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-config-path\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118242 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-cgroup\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118366 kubelet[2157]: I0209 09:48:48.118282 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-lib-modules\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118799 kubelet[2157]: I0209 09:48:48.118324 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-kernel\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118799 kubelet[2157]: I0209 09:48:48.118369 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-ipsec-secrets\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" 
(UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118799 kubelet[2157]: I0209 09:48:48.118412 2157 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-hostproc\") pod \"e35e1835-5376-43c6-a7ef-3445f5d424b1\" (UID: \"e35e1835-5376-43c6-a7ef-3445f5d424b1\") " Feb 9 09:48:48.118799 kubelet[2157]: I0209 09:48:48.118519 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.118799 kubelet[2157]: I0209 09:48:48.118570 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119131 kubelet[2157]: I0209 09:48:48.118609 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119131 kubelet[2157]: I0209 09:48:48.118648 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119269 kubelet[2157]: I0209 09:48:48.119136 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119551 kubelet[2157]: I0209 09:48:48.119515 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119873 kubelet[2157]: I0209 09:48:48.119840 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.119970 kubelet[2157]: I0209 09:48:48.119898 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.120164 kubelet[2157]: W0209 09:48:48.120113 2157 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e35e1835-5376-43c6-a7ef-3445f5d424b1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:48:48.125032 kubelet[2157]: I0209 09:48:48.124551 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.125192 kubelet[2157]: I0209 09:48:48.125111 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:48.125192 kubelet[2157]: I0209 09:48:48.125175 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:48.132138 kubelet[2157]: I0209 09:48:48.132072 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-kube-api-access-4v4j4" (OuterVolumeSpecName: "kube-api-access-4v4j4") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "kube-api-access-4v4j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:48.132916 kubelet[2157]: I0209 09:48:48.132871 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:48.133939 kubelet[2157]: I0209 09:48:48.133870 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:48.137623 kubelet[2157]: I0209 09:48:48.137571 2157 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e35e1835-5376-43c6-a7ef-3445f5d424b1" (UID: "e35e1835-5376-43c6-a7ef-3445f5d424b1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:48.218917 kubelet[2157]: I0209 09:48:48.218875 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-ipsec-secrets\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219158 kubelet[2157]: I0209 09:48:48.219136 2157 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-hostproc\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219285 kubelet[2157]: I0209 09:48:48.219265 2157 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-lib-modules\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219403 kubelet[2157]: I0209 09:48:48.219384 2157 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-kernel\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219573 kubelet[2157]: I0209 09:48:48.219553 2157 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cni-path\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219702 kubelet[2157]: I0209 09:48:48.219683 2157 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-host-proc-sys-net\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219825 kubelet[2157]: I0209 09:48:48.219806 2157 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-xtables-lock\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.219956 kubelet[2157]: 
I0209 09:48:48.219937 2157 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-bpf-maps\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220079 kubelet[2157]: I0209 09:48:48.220060 2157 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4v4j4\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-kube-api-access-4v4j4\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220201 kubelet[2157]: I0209 09:48:48.220181 2157 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e35e1835-5376-43c6-a7ef-3445f5d424b1-clustermesh-secrets\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220321 kubelet[2157]: I0209 09:48:48.220302 2157 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-etc-cni-netd\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220445 kubelet[2157]: I0209 09:48:48.220422 2157 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e35e1835-5376-43c6-a7ef-3445f5d424b1-hubble-tls\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220594 kubelet[2157]: I0209 09:48:48.220574 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-run\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220716 kubelet[2157]: I0209 09:48:48.220697 2157 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-config-path\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.220841 kubelet[2157]: I0209 09:48:48.220822 2157 reconciler_common.go:295] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e35e1835-5376-43c6-a7ef-3445f5d424b1-cilium-cgroup\") on node \"172.31.16.94\" DevicePath \"\"" Feb 9 09:48:48.336099 systemd[1]: var-lib-kubelet-pods-e35e1835\x2d5376\x2d43c6\x2da7ef\x2d3445f5d424b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4v4j4.mount: Deactivated successfully. Feb 9 09:48:48.336272 systemd[1]: var-lib-kubelet-pods-e35e1835\x2d5376\x2d43c6\x2da7ef\x2d3445f5d424b1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:48.336406 systemd[1]: var-lib-kubelet-pods-e35e1835\x2d5376\x2d43c6\x2da7ef\x2d3445f5d424b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:48:48.337407 systemd[1]: var-lib-kubelet-pods-e35e1835\x2d5376\x2d43c6\x2da7ef\x2d3445f5d424b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:48.515084 kubelet[2157]: E0209 09:48:48.515050 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:48.987100 kubelet[2157]: I0209 09:48:48.987066 2157 scope.go:115] "RemoveContainer" containerID="3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc" Feb 9 09:48:48.989969 env[1733]: time="2024-02-09T09:48:48.989894661Z" level=info msg="RemoveContainer for \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\"" Feb 9 09:48:48.995202 env[1733]: time="2024-02-09T09:48:48.995136216Z" level=info msg="RemoveContainer for \"3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc\" returns successfully" Feb 9 09:48:49.000685 systemd[1]: Removed slice kubepods-burstable-pode35e1835_5376_43c6_a7ef_3445f5d424b1.slice. 
Feb 9 09:48:49.099700 kubelet[2157]: I0209 09:48:49.099651 2157 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:48:49.099899 kubelet[2157]: E0209 09:48:49.099733 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e35e1835-5376-43c6-a7ef-3445f5d424b1" containerName="mount-cgroup" Feb 9 09:48:49.099899 kubelet[2157]: I0209 09:48:49.099779 2157 memory_manager.go:346] "RemoveStaleState removing state" podUID="e35e1835-5376-43c6-a7ef-3445f5d424b1" containerName="mount-cgroup" Feb 9 09:48:49.109064 systemd[1]: Created slice kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice. Feb 9 09:48:49.226321 kubelet[2157]: I0209 09:48:49.226242 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-bpf-maps\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226511 kubelet[2157]: I0209 09:48:49.226333 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02e8a45b-be06-41e1-b320-010d9fc4e93d-cilium-ipsec-secrets\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226511 kubelet[2157]: I0209 09:48:49.226412 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-host-proc-sys-kernel\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226669 kubelet[2157]: I0209 09:48:49.226532 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/02e8a45b-be06-41e1-b320-010d9fc4e93d-hubble-tls\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226669 kubelet[2157]: I0209 09:48:49.226608 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t226m\" (UniqueName: \"kubernetes.io/projected/02e8a45b-be06-41e1-b320-010d9fc4e93d-kube-api-access-t226m\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226802 kubelet[2157]: I0209 09:48:49.226682 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-cni-path\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226802 kubelet[2157]: I0209 09:48:49.226729 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02e8a45b-be06-41e1-b320-010d9fc4e93d-clustermesh-secrets\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226802 kubelet[2157]: I0209 09:48:49.226797 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-etc-cni-netd\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226971 kubelet[2157]: I0209 09:48:49.226867 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-lib-modules\") pod \"cilium-2qn27\" (UID: 
\"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.226971 kubelet[2157]: I0209 09:48:49.226937 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-xtables-lock\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.227102 kubelet[2157]: I0209 09:48:49.226987 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02e8a45b-be06-41e1-b320-010d9fc4e93d-cilium-config-path\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.227102 kubelet[2157]: I0209 09:48:49.227055 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-host-proc-sys-net\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.227226 kubelet[2157]: I0209 09:48:49.227128 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-hostproc\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.227226 kubelet[2157]: I0209 09:48:49.227197 2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-cilium-cgroup\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.227348 kubelet[2157]: I0209 09:48:49.227250 
2157 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02e8a45b-be06-41e1-b320-010d9fc4e93d-cilium-run\") pod \"cilium-2qn27\" (UID: \"02e8a45b-be06-41e1-b320-010d9fc4e93d\") " pod="kube-system/cilium-2qn27" Feb 9 09:48:49.423930 env[1733]: time="2024-02-09T09:48:49.423864017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qn27,Uid:02e8a45b-be06-41e1-b320-010d9fc4e93d,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:49.451446 env[1733]: time="2024-02-09T09:48:49.451334326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:49.451703 env[1733]: time="2024-02-09T09:48:49.451409399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:49.451703 env[1733]: time="2024-02-09T09:48:49.451436735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:49.452102 env[1733]: time="2024-02-09T09:48:49.451996025Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6 pid=4179 runtime=io.containerd.runc.v2 Feb 9 09:48:49.486001 systemd[1]: Started cri-containerd-a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6.scope. 
Feb 9 09:48:49.515853 kubelet[2157]: E0209 09:48:49.515809 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:49.532249 env[1733]: time="2024-02-09T09:48:49.532174498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qn27,Uid:02e8a45b-be06-41e1-b320-010d9fc4e93d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\"" Feb 9 09:48:49.537965 env[1733]: time="2024-02-09T09:48:49.537911190Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:48:49.555276 env[1733]: time="2024-02-09T09:48:49.555183163Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78\"" Feb 9 09:48:49.556330 env[1733]: time="2024-02-09T09:48:49.556272798Z" level=info msg="StartContainer for \"17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78\"" Feb 9 09:48:49.585044 systemd[1]: Started cri-containerd-17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78.scope. Feb 9 09:48:49.627205 kubelet[2157]: E0209 09:48:49.627145 2157 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:48:49.642449 env[1733]: time="2024-02-09T09:48:49.642383817Z" level=info msg="StartContainer for \"17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78\" returns successfully" Feb 9 09:48:49.656328 systemd[1]: cri-containerd-17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78.scope: Deactivated successfully. 
Feb 9 09:48:49.715588 env[1733]: time="2024-02-09T09:48:49.715410424Z" level=info msg="shim disconnected" id=17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78 Feb 9 09:48:49.715588 env[1733]: time="2024-02-09T09:48:49.715497629Z" level=warning msg="cleaning up after shim disconnected" id=17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78 namespace=k8s.io Feb 9 09:48:49.715588 env[1733]: time="2024-02-09T09:48:49.715520477Z" level=info msg="cleaning up dead shim" Feb 9 09:48:49.731223 env[1733]: time="2024-02-09T09:48:49.731161142Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4264 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:48:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 09:48:49.738683 env[1733]: time="2024-02-09T09:48:49.738619767Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:48:49.738871 env[1733]: time="2024-02-09T09:48:49.738768653Z" level=info msg="TearDown network for sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" successfully" Feb 9 09:48:49.738871 env[1733]: time="2024-02-09T09:48:49.738829025Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" returns successfully" Feb 9 09:48:49.742686 kubelet[2157]: I0209 09:48:49.742487 2157 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e35e1835-5376-43c6-a7ef-3445f5d424b1 path="/var/lib/kubelet/pods/e35e1835-5376-43c6-a7ef-3445f5d424b1/volumes" Feb 9 09:48:50.002992 env[1733]: time="2024-02-09T09:48:50.002850715Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 
09:48:50.031966 env[1733]: time="2024-02-09T09:48:50.031885290Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5\"" Feb 9 09:48:50.032792 env[1733]: time="2024-02-09T09:48:50.032713466Z" level=info msg="StartContainer for \"936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5\"" Feb 9 09:48:50.070609 systemd[1]: Started cri-containerd-936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5.scope. Feb 9 09:48:50.135743 env[1733]: time="2024-02-09T09:48:50.135671375Z" level=info msg="StartContainer for \"936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5\" returns successfully" Feb 9 09:48:50.147450 systemd[1]: cri-containerd-936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5.scope: Deactivated successfully. Feb 9 09:48:50.196957 env[1733]: time="2024-02-09T09:48:50.196841859Z" level=info msg="shim disconnected" id=936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5 Feb 9 09:48:50.196957 env[1733]: time="2024-02-09T09:48:50.196945456Z" level=warning msg="cleaning up after shim disconnected" id=936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5 namespace=k8s.io Feb 9 09:48:50.197322 env[1733]: time="2024-02-09T09:48:50.196968533Z" level=info msg="cleaning up dead shim" Feb 9 09:48:50.212532 env[1733]: time="2024-02-09T09:48:50.212411005Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4326 runtime=io.containerd.runc.v2\n" Feb 9 09:48:50.411802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455223988.mount: Deactivated successfully. 
Feb 9 09:48:50.516905 kubelet[2157]: E0209 09:48:50.516819 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:50.863494 kubelet[2157]: W0209 09:48:50.860776 2157 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode35e1835_5376_43c6_a7ef_3445f5d424b1.slice/cri-containerd-3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc.scope WatchSource:0}: container "3432c2a89126145a5c88857600d06368ae894473e71ebca648103d5e2234efdc" in namespace "k8s.io": not found Feb 9 09:48:51.008346 env[1733]: time="2024-02-09T09:48:51.008289997Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:48:51.031363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425504333.mount: Deactivated successfully. Feb 9 09:48:51.043520 env[1733]: time="2024-02-09T09:48:51.043428483Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f\"" Feb 9 09:48:51.044779 env[1733]: time="2024-02-09T09:48:51.044730664Z" level=info msg="StartContainer for \"c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f\"" Feb 9 09:48:51.083576 systemd[1]: Started cri-containerd-c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f.scope. Feb 9 09:48:51.168680 env[1733]: time="2024-02-09T09:48:51.167190157Z" level=info msg="StartContainer for \"c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f\" returns successfully" Feb 9 09:48:51.167670 systemd[1]: cri-containerd-c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f.scope: Deactivated successfully. 
Feb 9 09:48:51.405127 env[1733]: time="2024-02-09T09:48:51.405065155Z" level=info msg="shim disconnected" id=c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f Feb 9 09:48:51.405540 env[1733]: time="2024-02-09T09:48:51.405494087Z" level=warning msg="cleaning up after shim disconnected" id=c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f namespace=k8s.io Feb 9 09:48:51.405703 env[1733]: time="2024-02-09T09:48:51.405671941Z" level=info msg="cleaning up dead shim" Feb 9 09:48:51.429174 env[1733]: time="2024-02-09T09:48:51.429032082Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4387 runtime=io.containerd.runc.v2\n" Feb 9 09:48:51.430400 env[1733]: time="2024-02-09T09:48:51.430338235Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:51.433751 env[1733]: time="2024-02-09T09:48:51.433684936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:51.436742 env[1733]: time="2024-02-09T09:48:51.436694278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:48:51.437795 env[1733]: time="2024-02-09T09:48:51.437747865Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:48:51.441650 env[1733]: 
time="2024-02-09T09:48:51.441598535Z" level=info msg="CreateContainer within sandbox \"b40939cd4d4dd48313dee4ec103c636aae527f093796c1a8b0540afbf92209f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:48:51.462729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726538352.mount: Deactivated successfully. Feb 9 09:48:51.474599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921760954.mount: Deactivated successfully. Feb 9 09:48:51.482793 env[1733]: time="2024-02-09T09:48:51.482703505Z" level=info msg="CreateContainer within sandbox \"b40939cd4d4dd48313dee4ec103c636aae527f093796c1a8b0540afbf92209f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7d69043adf39d03d8208cc81b37630f48abadaea94e2d84530d5dcee82c3f13e\"" Feb 9 09:48:51.484017 env[1733]: time="2024-02-09T09:48:51.483971414Z" level=info msg="StartContainer for \"7d69043adf39d03d8208cc81b37630f48abadaea94e2d84530d5dcee82c3f13e\"" Feb 9 09:48:51.512495 systemd[1]: Started cri-containerd-7d69043adf39d03d8208cc81b37630f48abadaea94e2d84530d5dcee82c3f13e.scope. 
Feb 9 09:48:51.517343 kubelet[2157]: E0209 09:48:51.517245 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:51.568236 env[1733]: time="2024-02-09T09:48:51.568171729Z" level=info msg="StartContainer for \"7d69043adf39d03d8208cc81b37630f48abadaea94e2d84530d5dcee82c3f13e\" returns successfully" Feb 9 09:48:52.016598 env[1733]: time="2024-02-09T09:48:52.016541438Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:48:52.037341 env[1733]: time="2024-02-09T09:48:52.037279110Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6\"" Feb 9 09:48:52.038339 env[1733]: time="2024-02-09T09:48:52.038297417Z" level=info msg="StartContainer for \"497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6\"" Feb 9 09:48:52.073740 systemd[1]: Started cri-containerd-497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6.scope. Feb 9 09:48:52.114544 kubelet[2157]: I0209 09:48:52.113708 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-wdchv" podStartSLOduration=-9.223372031741144e+09 pod.CreationTimestamp="2024-02-09 09:48:47 +0000 UTC" firstStartedPulling="2024-02-09 09:48:47.626383703 +0000 UTC m=+90.461318496" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:52.065129578 +0000 UTC m=+94.900064431" watchObservedRunningTime="2024-02-09 09:48:52.113631606 +0000 UTC m=+94.948566423" Feb 9 09:48:52.130627 systemd[1]: cri-containerd-497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6.scope: Deactivated successfully. 
Feb 9 09:48:52.135331 env[1733]: time="2024-02-09T09:48:52.135158718Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice/cri-containerd-497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6.scope/memory.events\": no such file or directory" Feb 9 09:48:52.138073 env[1733]: time="2024-02-09T09:48:52.138017759Z" level=info msg="StartContainer for \"497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6\" returns successfully" Feb 9 09:48:52.187022 env[1733]: time="2024-02-09T09:48:52.186952822Z" level=info msg="shim disconnected" id=497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6 Feb 9 09:48:52.187425 env[1733]: time="2024-02-09T09:48:52.187384407Z" level=warning msg="cleaning up after shim disconnected" id=497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6 namespace=k8s.io Feb 9 09:48:52.187602 env[1733]: time="2024-02-09T09:48:52.187573265Z" level=info msg="cleaning up dead shim" Feb 9 09:48:52.201880 env[1733]: time="2024-02-09T09:48:52.201824272Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4482 runtime=io.containerd.runc.v2\n" Feb 9 09:48:52.517616 kubelet[2157]: E0209 09:48:52.517554 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:53.025228 env[1733]: time="2024-02-09T09:48:53.025115971Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:48:53.053925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624184173.mount: Deactivated successfully. 
Feb 9 09:48:53.065962 env[1733]: time="2024-02-09T09:48:53.065876016Z" level=info msg="CreateContainer within sandbox \"a164cb20fffd546c1cbad3f0cb1bc91dc3c5349bc3502730ca137782bb3105a6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44\"" Feb 9 09:48:53.067185 env[1733]: time="2024-02-09T09:48:53.066591979Z" level=info msg="StartContainer for \"4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44\"" Feb 9 09:48:53.102658 systemd[1]: Started cri-containerd-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44.scope. Feb 9 09:48:53.174392 env[1733]: time="2024-02-09T09:48:53.174327918Z" level=info msg="StartContainer for \"4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44\" returns successfully" Feb 9 09:48:53.518262 kubelet[2157]: E0209 09:48:53.518201 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:53.634841 kubelet[2157]: I0209 09:48:53.634787 2157 setters.go:548] "Node became not ready" node="172.31.16.94" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 09:48:53.634694161 +0000 UTC m=+96.469628954 LastTransitionTime:2024-02-09 09:48:53.634694161 +0000 UTC m=+96.469628954 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 09:48:53.880509 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 09:48:53.984107 kubelet[2157]: W0209 09:48:53.983122 2157 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice/cri-containerd-17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78.scope WatchSource:0}: task 
17aca57e3a63a76a7607d8f773181730cdb31eb994e8b6c25567625eaf466d78 not found: not found Feb 9 09:48:54.072331 kubelet[2157]: I0209 09:48:54.072289 2157 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2qn27" podStartSLOduration=5.072237943 pod.CreationTimestamp="2024-02-09 09:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:54.071238861 +0000 UTC m=+96.906173714" watchObservedRunningTime="2024-02-09 09:48:54.072237943 +0000 UTC m=+96.907172736" Feb 9 09:48:54.456344 systemd[1]: run-containerd-runc-k8s.io-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44-runc.V3DdpN.mount: Deactivated successfully. Feb 9 09:48:54.519176 kubelet[2157]: E0209 09:48:54.519104 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:55.519986 kubelet[2157]: E0209 09:48:55.519916 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:56.520627 kubelet[2157]: E0209 09:48:56.520583 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:56.701272 systemd[1]: run-containerd-runc-k8s.io-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44-runc.xKolkg.mount: Deactivated successfully. 
Feb 9 09:48:57.098667 kubelet[2157]: W0209 09:48:57.098586 2157 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice/cri-containerd-936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5.scope WatchSource:0}: task 936060e15678e1c5da75c38bad4b74cc2af2a42d4591e68e3cceb0c4c824fcc5 not found: not found Feb 9 09:48:57.522279 kubelet[2157]: E0209 09:48:57.522079 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:57.635371 systemd-networkd[1533]: lxc_health: Link UP Feb 9 09:48:57.638756 (udev-worker)[5046]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:48:57.641329 (udev-worker)[5045]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:48:57.690746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:48:57.690700 systemd-networkd[1533]: lxc_health: Gained carrier Feb 9 09:48:58.522998 kubelet[2157]: E0209 09:48:58.522955 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:58.943746 systemd[1]: run-containerd-runc-k8s.io-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44-runc.iHApDB.mount: Deactivated successfully. 
Feb 9 09:48:59.212196 systemd-networkd[1533]: lxc_health: Gained IPv6LL Feb 9 09:48:59.433341 kubelet[2157]: E0209 09:48:59.433285 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:59.524946 kubelet[2157]: E0209 09:48:59.524778 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:00.208856 kubelet[2157]: W0209 09:49:00.208791 2157 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice/cri-containerd-c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f.scope WatchSource:0}: task c850a8122e08e24f7c5fb49f91d03a7776287d67274ed99593debaabf3ec103f not found: not found Feb 9 09:49:00.526055 kubelet[2157]: E0209 09:49:00.525906 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:01.360493 systemd[1]: run-containerd-runc-k8s.io-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44-runc.ucbJCW.mount: Deactivated successfully. 
Feb 9 09:49:01.526697 kubelet[2157]: E0209 09:49:01.526636 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:02.526809 kubelet[2157]: E0209 09:49:02.526757 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:03.333017 kubelet[2157]: W0209 09:49:03.332934 2157 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02e8a45b_be06_41e1_b320_010d9fc4e93d.slice/cri-containerd-497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6.scope WatchSource:0}: task 497596ee232b01ce28536c4d30cdf0cd9f0cd77b7bdae1fdf5a7556879af18f6 not found: not found Feb 9 09:49:03.533802 kubelet[2157]: E0209 09:49:03.533706 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:03.623532 systemd[1]: run-containerd-runc-k8s.io-4c10d85fa2b07087698463436bd86752c9a771ba947579aa0e7c54fb5610aa44-runc.9PUO19.mount: Deactivated successfully. 
Feb 9 09:49:04.534221 kubelet[2157]: E0209 09:49:04.534155 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:05.535088 kubelet[2157]: E0209 09:49:05.535026 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:06.535403 kubelet[2157]: E0209 09:49:06.535328 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:07.536562 kubelet[2157]: E0209 09:49:07.536520 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:08.537326 kubelet[2157]: E0209 09:49:08.537281 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:09.538377 kubelet[2157]: E0209 09:49:09.538308 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:10.539600 kubelet[2157]: E0209 09:49:10.539559 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:11.541283 kubelet[2157]: E0209 09:49:11.541240 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:12.542403 kubelet[2157]: E0209 09:49:12.542335 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:13.543233 kubelet[2157]: E0209 09:49:13.543166 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:14.543953 kubelet[2157]: E0209 09:49:14.543889 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
09:49:15.544658 kubelet[2157]: E0209 09:49:15.544561 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:16.545620 kubelet[2157]: E0209 09:49:16.545544 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:17.546376 kubelet[2157]: E0209 09:49:17.546313 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:18.547046 kubelet[2157]: E0209 09:49:18.546975 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:19.433647 kubelet[2157]: E0209 09:49:19.433586 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:19.455196 env[1733]: time="2024-02-09T09:49:19.455142081Z" level=info msg="StopPodSandbox for \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\"" Feb 9 09:49:19.456003 env[1733]: time="2024-02-09T09:49:19.455930274Z" level=info msg="TearDown network for sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" successfully" Feb 9 09:49:19.456138 env[1733]: time="2024-02-09T09:49:19.456105536Z" level=info msg="StopPodSandbox for \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" returns successfully" Feb 9 09:49:19.456966 env[1733]: time="2024-02-09T09:49:19.456896922Z" level=info msg="RemovePodSandbox for \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\"" Feb 9 09:49:19.457105 env[1733]: time="2024-02-09T09:49:19.456957378Z" level=info msg="Forcibly stopping sandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\"" Feb 9 09:49:19.457217 env[1733]: time="2024-02-09T09:49:19.457093028Z" level=info msg="TearDown network for sandbox 
\"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" successfully" Feb 9 09:49:19.464033 env[1733]: time="2024-02-09T09:49:19.463967583Z" level=info msg="RemovePodSandbox \"721ca5c6020c1645a515fcf3660d287bda671a40a0ab9269cd2768264b845dc2\" returns successfully" Feb 9 09:49:19.464759 env[1733]: time="2024-02-09T09:49:19.464706600Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:49:19.464927 env[1733]: time="2024-02-09T09:49:19.464854909Z" level=info msg="TearDown network for sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" successfully" Feb 9 09:49:19.465005 env[1733]: time="2024-02-09T09:49:19.464922902Z" level=info msg="StopPodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" returns successfully" Feb 9 09:49:19.466254 env[1733]: time="2024-02-09T09:49:19.466189757Z" level=info msg="RemovePodSandbox for \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:49:19.466416 env[1733]: time="2024-02-09T09:49:19.466248881Z" level=info msg="Forcibly stopping sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\"" Feb 9 09:49:19.466416 env[1733]: time="2024-02-09T09:49:19.466379239Z" level=info msg="TearDown network for sandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" successfully" Feb 9 09:49:19.471051 env[1733]: time="2024-02-09T09:49:19.470945904Z" level=info msg="RemovePodSandbox \"ef119e9a83f6a70761d7814cdd20d127b225b75792c6c49e8a50be78aee2bd86\" returns successfully" Feb 9 09:49:19.547603 kubelet[2157]: E0209 09:49:19.547540 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:20.548191 kubelet[2157]: E0209 09:49:20.548145 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:21.549585 
kubelet[2157]: E0209 09:49:21.549545 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:22.550457 kubelet[2157]: E0209 09:49:22.550394 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:23.551432 kubelet[2157]: E0209 09:49:23.551386 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:24.469428 kubelet[2157]: E0209 09:49:24.469368 2157 controller.go:189] failed to update lease, error: Put "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 09:49:24.553056 kubelet[2157]: E0209 09:49:24.553001 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:25.553961 kubelet[2157]: E0209 09:49:25.553916 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:26.554956 kubelet[2157]: E0209 09:49:26.554889 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:27.555072 kubelet[2157]: E0209 09:49:27.555030 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:28.556753 kubelet[2157]: E0209 09:49:28.556706 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:29.557673 kubelet[2157]: E0209 09:49:29.557604 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:49:30.558221 kubelet[2157]: E0209 09:49:30.558167 2157 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:31.559999 kubelet[2157]: E0209 09:49:31.559958 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:32.560886 kubelet[2157]: E0209 09:49:32.560830 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:33.561294 kubelet[2157]: E0209 09:49:33.561200 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:34.241067 kubelet[2157]: E0209 09:49:34.241005 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T09:49:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T09:49:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T09:49:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T09:49:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22\\\",\\\"registry.k8s.io/kube-proxy:v1.26.13\\\"],\\\"sizeBytes\\\":21139040},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"172.31.16.94\": Patch \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 9 09:49:34.470373 kubelet[2157]: E0209 09:49:34.470309 2157 controller.go:189] failed to update lease, error: Put "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 09:49:34.561584 kubelet[2157]: E0209 09:49:34.561546 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:35.002222 kubelet[2157]: E0209 09:49:35.001733 2157 controller.go:189] failed to update lease, error: Put "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": unexpected EOF
Feb 9 09:49:35.015592 kubelet[2157]: E0209 09:49:35.015526 2157 controller.go:189] failed to update lease, error: Put "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": read tcp 172.31.16.94:58664->172.31.26.94:6443: read: connection reset by peer
Feb 9 09:49:35.016263 kubelet[2157]: E0209 09:49:35.016213 2157 controller.go:189] failed to update lease, error: Put "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": dial tcp 172.31.26.94:6443: connect: connection refused
Feb 9 09:49:35.016263 kubelet[2157]: I0209 09:49:35.016261 2157 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 9 09:49:35.016901 kubelet[2157]: E0209 09:49:35.016838 2157 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": dial tcp 172.31.26.94:6443: connect: connection refused
Feb 9 09:49:35.217903 kubelet[2157]: E0209 09:49:35.217843 2157 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": dial tcp 172.31.26.94:6443: connect: connection refused
Feb 9 09:49:35.562625 kubelet[2157]: E0209 09:49:35.562548 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:35.618970 kubelet[2157]: E0209 09:49:35.618922 2157 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": dial tcp 172.31.26.94:6443: connect: connection refused
Feb 9 09:49:36.003680 kubelet[2157]: E0209 09:49:36.003321 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.94\": Get \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94?timeout=10s\": dial tcp 172.31.26.94:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Feb 9 09:49:36.004101 kubelet[2157]: E0209 09:49:36.004067 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.94\": Get \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94?timeout=10s\": dial tcp 172.31.26.94:6443: connect: connection refused"
Feb 9 09:49:36.004942 kubelet[2157]: E0209 09:49:36.004909 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.94\": Get \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94?timeout=10s\": dial tcp 172.31.26.94:6443: connect: connection refused"
Feb 9 09:49:36.005552 kubelet[2157]: E0209 09:49:36.005517 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.94\": Get \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94?timeout=10s\": dial tcp 172.31.26.94:6443: connect: connection refused"
Feb 9 09:49:36.005689 kubelet[2157]: E0209 09:49:36.005555 2157 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 9 09:49:36.563781 kubelet[2157]: E0209 09:49:36.563720 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:37.564268 kubelet[2157]: E0209 09:49:37.564164 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:38.564989 kubelet[2157]: E0209 09:49:38.564929 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:39.433428 kubelet[2157]: E0209 09:49:39.433386 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:39.565168 kubelet[2157]: E0209 09:49:39.565098 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:40.565599 kubelet[2157]: E0209 09:49:40.565530 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:41.566626 kubelet[2157]: E0209 09:49:41.566587 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:42.568094 kubelet[2157]: E0209 09:49:42.568039 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:43.569799 kubelet[2157]: E0209 09:49:43.569733 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:44.570309 kubelet[2157]: E0209 09:49:44.570246 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:45.570905 kubelet[2157]: E0209 09:49:45.570822 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:46.420084 kubelet[2157]: E0209 09:49:46.419963 2157 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 9 09:49:46.571095 kubelet[2157]: E0209 09:49:46.571025 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:47.571811 kubelet[2157]: E0209 09:49:47.571748 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:48.572584 kubelet[2157]: E0209 09:49:48.572541 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:49.574346 kubelet[2157]: E0209 09:49:49.574290 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:50.575428 kubelet[2157]: E0209 09:49:50.575359 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:51.576324 kubelet[2157]: E0209 09:49:51.576280 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:52.577125 kubelet[2157]: E0209 09:49:52.577052 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:53.577446 kubelet[2157]: E0209 09:49:53.577398 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:54.578485 kubelet[2157]: E0209 09:49:54.578345 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:55.579548 kubelet[2157]: E0209 09:49:55.579505 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:56.320906 kubelet[2157]: E0209 09:49:56.320868 2157 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.94\": Get \"https://172.31.26.94:6443/api/v1/nodes/172.31.16.94?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 9 09:49:56.580721 kubelet[2157]: E0209 09:49:56.580585 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:57.582017 kubelet[2157]: E0209 09:49:57.581971 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:58.021737 kubelet[2157]: E0209 09:49:58.021577 2157 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.26.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.94?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 9 09:49:58.583697 kubelet[2157]: E0209 09:49:58.583630 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:59.433415 kubelet[2157]: E0209 09:49:59.433339 2157 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:59.584165 kubelet[2157]: E0209 09:49:59.584122 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:50:00.585609 kubelet[2157]: E0209 09:50:00.585491 2157 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"