Feb 9 10:05:46.728311 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 10:05:46.728330 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024 Feb 9 10:05:46.728338 kernel: efi: EFI v2.70 by EDK II Feb 9 10:05:46.728344 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 10:05:46.728349 kernel: random: crng init done Feb 9 10:05:46.728354 kernel: ACPI: Early table checksum verification disabled Feb 9 10:05:46.728361 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 10:05:46.728367 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 10:05:46.728373 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728378 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728383 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728388 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728394 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728399 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728407 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728413 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728419 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 10:05:46.728424 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 10:05:46.728430 kernel: NUMA: Failed to initialise from firmware Feb 9 10:05:46.728435 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 10:05:46.728441 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Feb 9 10:05:46.728447 kernel: Zone ranges: Feb 9 10:05:46.728452 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 10:05:46.728459 kernel: DMA32 empty Feb 9 10:05:46.728464 kernel: Normal empty Feb 9 10:05:46.728470 kernel: Movable zone start for each node Feb 9 10:05:46.728475 kernel: Early memory node ranges Feb 9 10:05:46.728481 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 10:05:46.728503 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 10:05:46.728510 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 10:05:46.728516 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 10:05:46.728522 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 10:05:46.728527 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 10:05:46.728533 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 10:05:46.728538 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 10:05:46.728546 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 10:05:46.728552 kernel: psci: probing for conduit method from ACPI. Feb 9 10:05:46.728557 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 10:05:46.728563 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 10:05:46.728568 kernel: psci: Trusted OS migration not required Feb 9 10:05:46.728577 kernel: psci: SMC Calling Convention v1.1 Feb 9 10:05:46.728583 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 10:05:46.728590 kernel: ACPI: SRAT not present Feb 9 10:05:46.728597 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 10:05:46.728603 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 10:05:46.728609 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 10:05:46.728615 kernel: Detected PIPT I-cache on CPU0 Feb 9 10:05:46.728621 kernel: CPU features: detected: GIC system register CPU interface Feb 9 10:05:46.728627 kernel: CPU features: detected: Hardware dirty bit management Feb 9 10:05:46.728633 kernel: CPU features: detected: Spectre-v4 Feb 9 10:05:46.728639 kernel: CPU features: detected: Spectre-BHB Feb 9 10:05:46.728646 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 10:05:46.728652 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 10:05:46.728658 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 10:05:46.728664 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 10:05:46.728670 kernel: Policy zone: DMA Feb 9 10:05:46.728677 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 10:05:46.728684 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 9 10:05:46.728690 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 10:05:46.728696 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 10:05:46.728702 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 10:05:46.728708 kernel: Memory: 2459144K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113144K reserved, 0K cma-reserved) Feb 9 10:05:46.728716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 10:05:46.728722 kernel: trace event string verifier disabled Feb 9 10:05:46.728728 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 10:05:46.728738 kernel: rcu: RCU event tracing is enabled. Feb 9 10:05:46.728744 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 10:05:46.728750 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 10:05:46.728757 kernel: Tracing variant of Tasks RCU enabled. Feb 9 10:05:46.728763 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 10:05:46.728769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 10:05:46.728775 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 10:05:46.728781 kernel: GICv3: 256 SPIs implemented Feb 9 10:05:46.728788 kernel: GICv3: 0 Extended SPIs implemented Feb 9 10:05:46.728794 kernel: GICv3: Distributor has no Range Selector support Feb 9 10:05:46.728800 kernel: Root IRQ handler: gic_handle_irq Feb 9 10:05:46.728805 kernel: GICv3: 16 PPIs implemented Feb 9 10:05:46.728811 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 10:05:46.728817 kernel: ACPI: SRAT not present Feb 9 10:05:46.728823 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 10:05:46.728829 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 10:05:46.728835 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 10:05:46.728841 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 10:05:46.728847 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 10:05:46.728853 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 10:05:46.728861 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 10:05:46.728867 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 10:05:46.728873 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 10:05:46.728879 kernel: arm-pv: using stolen time PV Feb 9 10:05:46.728886 kernel: Console: colour dummy device 80x25 Feb 9 10:05:46.728892 kernel: ACPI: Core revision 20210730 Feb 9 10:05:46.728898 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Feb 9 10:05:46.728905 kernel: pid_max: default: 32768 minimum: 301 Feb 9 10:05:46.728911 kernel: LSM: Security Framework initializing Feb 9 10:05:46.728917 kernel: SELinux: Initializing. Feb 9 10:05:46.728924 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 10:05:46.728930 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 10:05:46.728937 kernel: rcu: Hierarchical SRCU implementation. Feb 9 10:05:46.728943 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 10:05:46.728949 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 10:05:46.728955 kernel: Remapping and enabling EFI services. Feb 9 10:05:46.728961 kernel: smp: Bringing up secondary CPUs ... Feb 9 10:05:46.728967 kernel: Detected PIPT I-cache on CPU1 Feb 9 10:05:46.728974 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 10:05:46.729001 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 10:05:46.729008 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 10:05:46.729014 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 10:05:46.729021 kernel: Detected PIPT I-cache on CPU2 Feb 9 10:05:46.729027 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 10:05:46.729033 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 10:05:46.729040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 10:05:46.729046 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 10:05:46.729052 kernel: Detected PIPT I-cache on CPU3 Feb 9 10:05:46.729058 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 10:05:46.729066 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 10:05:46.729072 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 10:05:46.729078 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 10:05:46.729085 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 10:05:46.729095 kernel: SMP: Total of 4 processors activated. Feb 9 10:05:46.729102 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 10:05:46.729109 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 10:05:46.729115 kernel: CPU features: detected: Common not Private translations Feb 9 10:05:46.729122 kernel: CPU features: detected: CRC32 instructions Feb 9 10:05:46.729128 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 10:05:46.729135 kernel: CPU features: detected: LSE atomic instructions Feb 9 10:05:46.729141 kernel: CPU features: detected: Privileged Access Never Feb 9 10:05:46.729149 kernel: CPU features: detected: RAS Extension Support Feb 9 10:05:46.729156 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 10:05:46.729162 kernel: CPU: All CPU(s) started at EL1 Feb 9 10:05:46.729169 kernel: alternatives: patching kernel code Feb 9 10:05:46.729176 kernel: devtmpfs: initialized Feb 9 10:05:46.729183 kernel: KASLR enabled Feb 9 10:05:46.729190 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 10:05:46.729196 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 10:05:46.729203 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 10:05:46.729209 kernel: SMBIOS 3.0.0 present. 
Feb 9 10:05:46.729216 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Feb 9 10:05:46.729222 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 10:05:46.729229 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 10:05:46.729235 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 10:05:46.729243 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 10:05:46.729250 kernel: audit: initializing netlink subsys (disabled) Feb 9 10:05:46.729257 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1 Feb 9 10:05:46.729263 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 10:05:46.729270 kernel: cpuidle: using governor menu Feb 9 10:05:46.729277 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 10:05:46.729283 kernel: ASID allocator initialised with 32768 entries Feb 9 10:05:46.729290 kernel: ACPI: bus type PCI registered Feb 9 10:05:46.729297 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 10:05:46.729304 kernel: Serial: AMBA PL011 UART driver Feb 9 10:05:46.729311 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 10:05:46.729317 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 10:05:46.729324 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 10:05:46.729331 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 10:05:46.729337 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 10:05:46.729344 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 10:05:46.729350 kernel: ACPI: Added _OSI(Module Device) Feb 9 10:05:46.729357 kernel: ACPI: Added _OSI(Processor Device) Feb 9 10:05:46.729364 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 10:05:46.729371 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 10:05:46.729377 kernel: ACPI: Added 
_OSI(Linux-Dell-Video) Feb 9 10:05:46.729384 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 10:05:46.729390 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 10:05:46.729397 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 10:05:46.729403 kernel: ACPI: Interpreter enabled Feb 9 10:05:46.729410 kernel: ACPI: Using GIC for interrupt routing Feb 9 10:05:46.729416 kernel: ACPI: MCFG table detected, 1 entries Feb 9 10:05:46.729424 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 9 10:05:46.729430 kernel: printk: console [ttyAMA0] enabled Feb 9 10:05:46.729437 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 10:05:46.729557 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 10:05:46.729621 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 10:05:46.729679 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 10:05:46.729744 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 9 10:05:46.729805 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 9 10:05:46.729814 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 9 10:05:46.729820 kernel: PCI host bridge to bus 0000:00 Feb 9 10:05:46.729890 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 9 10:05:46.729944 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 9 10:05:46.730013 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 9 10:05:46.730068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 10:05:46.730143 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 9 10:05:46.730217 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 9 10:05:46.730279 kernel: pci 0000:00:01.0: reg 0x10: [io 
0x0000-0x001f] Feb 9 10:05:46.730338 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 9 10:05:46.730400 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 10:05:46.730461 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 9 10:05:46.730523 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 9 10:05:46.730585 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 9 10:05:46.730639 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 9 10:05:46.730692 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 10:05:46.730744 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 9 10:05:46.730753 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 10:05:46.730760 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 10:05:46.730766 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 10:05:46.730774 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 10:05:46.730781 kernel: iommu: Default domain type: Translated Feb 9 10:05:46.730788 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 10:05:46.730794 kernel: vgaarb: loaded Feb 9 10:05:46.730800 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 10:05:46.730807 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 10:05:46.730814 kernel: PTP clock support registered Feb 9 10:05:46.730820 kernel: Registered efivars operations Feb 9 10:05:46.730827 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 10:05:46.730833 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 10:05:46.730841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 10:05:46.730848 kernel: pnp: PnP ACPI init Feb 9 10:05:46.730911 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 9 10:05:46.730921 kernel: pnp: PnP ACPI: found 1 devices Feb 9 10:05:46.730927 kernel: NET: Registered PF_INET protocol family Feb 9 10:05:46.730934 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 10:05:46.730941 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 10:05:46.730947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 10:05:46.730956 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 10:05:46.730962 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 10:05:46.730969 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 10:05:46.730988 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 10:05:46.730996 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 10:05:46.731003 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 10:05:46.731009 kernel: PCI: CLS 0 bytes, default 64 Feb 9 10:05:46.731016 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 9 10:05:46.731024 kernel: kvm [1]: HYP mode not available Feb 9 10:05:46.731031 kernel: Initialise system trusted keyrings Feb 9 10:05:46.731038 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 10:05:46.731044 kernel: Key type asymmetric registered 
Feb 9 10:05:46.731051 kernel: Asymmetric key parser 'x509' registered Feb 9 10:05:46.731057 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 10:05:46.731064 kernel: io scheduler mq-deadline registered Feb 9 10:05:46.731070 kernel: io scheduler kyber registered Feb 9 10:05:46.731077 kernel: io scheduler bfq registered Feb 9 10:05:46.731083 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 10:05:46.731091 kernel: ACPI: button: Power Button [PWRB] Feb 9 10:05:46.731098 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 10:05:46.731164 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 9 10:05:46.731173 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 10:05:46.731180 kernel: thunder_xcv, ver 1.0 Feb 9 10:05:46.731186 kernel: thunder_bgx, ver 1.0 Feb 9 10:05:46.731193 kernel: nicpf, ver 1.0 Feb 9 10:05:46.731199 kernel: nicvf, ver 1.0 Feb 9 10:05:46.731269 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 10:05:46.731328 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T10:05:46 UTC (1707473146) Feb 9 10:05:46.731337 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 10:05:46.731344 kernel: NET: Registered PF_INET6 protocol family Feb 9 10:05:46.731350 kernel: Segment Routing with IPv6 Feb 9 10:05:46.731357 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 10:05:46.731363 kernel: NET: Registered PF_PACKET protocol family Feb 9 10:05:46.731370 kernel: Key type dns_resolver registered Feb 9 10:05:46.731376 kernel: registered taskstats version 1 Feb 9 10:05:46.731384 kernel: Loading compiled-in X.509 certificates Feb 9 10:05:46.731391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d' Feb 9 10:05:46.731397 kernel: Key type .fscrypt registered Feb 9 10:05:46.731404 kernel: Key type fscrypt-provisioning registered Feb 9 10:05:46.731410 kernel: ima: No TPM chip found, 
activating TPM-bypass! Feb 9 10:05:46.731417 kernel: ima: Allocated hash algorithm: sha1 Feb 9 10:05:46.731423 kernel: ima: No architecture policies found Feb 9 10:05:46.731430 kernel: Freeing unused kernel memory: 34688K Feb 9 10:05:46.731436 kernel: Run /init as init process Feb 9 10:05:46.731445 kernel: with arguments: Feb 9 10:05:46.731451 kernel: /init Feb 9 10:05:46.731457 kernel: with environment: Feb 9 10:05:46.731463 kernel: HOME=/ Feb 9 10:05:46.731470 kernel: TERM=linux Feb 9 10:05:46.731476 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 10:05:46.731485 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 10:05:46.731493 systemd[1]: Detected virtualization kvm. Feb 9 10:05:46.731502 systemd[1]: Detected architecture arm64. Feb 9 10:05:46.731509 systemd[1]: Running in initrd. Feb 9 10:05:46.731515 systemd[1]: No hostname configured, using default hostname. Feb 9 10:05:46.731522 systemd[1]: Hostname set to . Feb 9 10:05:46.731529 systemd[1]: Initializing machine ID from VM UUID. Feb 9 10:05:46.731536 systemd[1]: Queued start job for default target initrd.target. Feb 9 10:05:46.731543 systemd[1]: Started systemd-ask-password-console.path. Feb 9 10:05:46.731550 systemd[1]: Reached target cryptsetup.target. Feb 9 10:05:46.731558 systemd[1]: Reached target paths.target. Feb 9 10:05:46.731565 systemd[1]: Reached target slices.target. Feb 9 10:05:46.731571 systemd[1]: Reached target swap.target. Feb 9 10:05:46.731578 systemd[1]: Reached target timers.target. Feb 9 10:05:46.731585 systemd[1]: Listening on iscsid.socket. Feb 9 10:05:46.731620 systemd[1]: Listening on iscsiuio.socket. Feb 9 10:05:46.731628 systemd[1]: Listening on systemd-journald-audit.socket. 
Feb 9 10:05:46.731638 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 10:05:46.731645 systemd[1]: Listening on systemd-journald.socket. Feb 9 10:05:46.731652 systemd[1]: Listening on systemd-networkd.socket. Feb 9 10:05:46.731659 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 10:05:46.731666 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 10:05:46.731673 systemd[1]: Reached target sockets.target. Feb 9 10:05:46.731680 systemd[1]: Starting kmod-static-nodes.service... Feb 9 10:05:46.731686 systemd[1]: Finished network-cleanup.service. Feb 9 10:05:46.731693 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 10:05:46.731702 systemd[1]: Starting systemd-journald.service... Feb 9 10:05:46.731714 systemd[1]: Starting systemd-modules-load.service... Feb 9 10:05:46.731730 systemd[1]: Starting systemd-resolved.service... Feb 9 10:05:46.731737 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 10:05:46.731744 systemd[1]: Finished kmod-static-nodes.service. Feb 9 10:05:46.731751 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 10:05:46.731758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 10:05:46.731765 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 10:05:46.731773 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 10:05:46.731781 kernel: audit: type=1130 audit(1707473146.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.731791 systemd-journald[291]: Journal started Feb 9 10:05:46.731831 systemd-journald[291]: Runtime Journal (/run/log/journal/d586a071203749c3863ff0ccbbd3868d) is 6.0M, max 48.7M, 42.6M free. Feb 9 10:05:46.731860 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 10:05:46.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.719731 systemd-modules-load[292]: Inserted module 'overlay' Feb 9 10:05:46.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.733927 systemd-resolved[293]: Positive Trust Anchors: Feb 9 10:05:46.738477 kernel: audit: type=1130 audit(1707473146.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.738494 systemd[1]: Started systemd-journald.service. Feb 9 10:05:46.733934 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 10:05:46.743216 kernel: audit: type=1130 audit(1707473146.738:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 10:05:46.733964 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 10:05:46.750243 kernel: audit: type=1130 audit(1707473146.742:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.738126 systemd-resolved[293]: Defaulting to hostname 'linux'. Feb 9 10:05:46.739659 systemd[1]: Started systemd-resolved.service. Feb 9 10:05:46.743780 systemd[1]: Reached target nss-lookup.target. Feb 9 10:05:46.755426 kernel: audit: type=1130 audit(1707473146.751:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.755443 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 10:05:46.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.751077 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 10:05:46.752962 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 10:05:46.757461 systemd-modules-load[292]: Inserted module 'br_netfilter' Feb 9 10:05:46.758128 kernel: Bridge firewalling registered Feb 9 10:05:46.761567 dracut-cmdline[307]: dracut-dracut-053 Feb 9 10:05:46.763707 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d Feb 9 10:05:46.769997 kernel: SCSI subsystem initialized Feb 9 10:05:46.777381 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 10:05:46.777416 kernel: device-mapper: uevent: version 1.0.3 Feb 9 10:05:46.777426 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 10:05:46.779573 systemd-modules-load[292]: Inserted module 'dm_multipath' Feb 9 10:05:46.780273 systemd[1]: Finished systemd-modules-load.service. Feb 9 10:05:46.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.783347 systemd[1]: Starting systemd-sysctl.service... Feb 9 10:05:46.784387 kernel: audit: type=1130 audit(1707473146.779:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 10:05:46.790421 systemd[1]: Finished systemd-sysctl.service. Feb 9 10:05:46.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Feb 9 10:05:46.794334 kernel: audit: type=1130 audit(1707473146.790:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:46.816002 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 10:05:46.824000 kernel: iscsi: registered transport (tcp)
Feb 9 10:05:46.837001 kernel: iscsi: registered transport (qla4xxx)
Feb 9 10:05:46.837015 kernel: QLogic iSCSI HBA Driver
Feb 9 10:05:46.869822 systemd[1]: Finished dracut-cmdline.service.
Feb 9 10:05:46.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:46.871175 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 10:05:46.873625 kernel: audit: type=1130 audit(1707473146.869:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:46.915013 kernel: raid6: neonx8 gen() 13810 MB/s
Feb 9 10:05:46.932002 kernel: raid6: neonx8 xor() 10830 MB/s
Feb 9 10:05:46.948991 kernel: raid6: neonx4 gen() 13571 MB/s
Feb 9 10:05:46.966000 kernel: raid6: neonx4 xor() 10900 MB/s
Feb 9 10:05:46.983000 kernel: raid6: neonx2 gen() 12977 MB/s
Feb 9 10:05:47.000000 kernel: raid6: neonx2 xor() 10249 MB/s
Feb 9 10:05:47.016990 kernel: raid6: neonx1 gen() 10520 MB/s
Feb 9 10:05:47.033991 kernel: raid6: neonx1 xor() 8799 MB/s
Feb 9 10:05:47.051001 kernel: raid6: int64x8 gen() 6292 MB/s
Feb 9 10:05:47.068000 kernel: raid6: int64x8 xor() 3550 MB/s
Feb 9 10:05:47.085001 kernel: raid6: int64x4 gen() 7305 MB/s
Feb 9 10:05:47.101999 kernel: raid6: int64x4 xor() 3858 MB/s
Feb 9 10:05:47.119001 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 9 10:05:47.135999 kernel: raid6: int64x2 xor() 3327 MB/s
Feb 9 10:05:47.153000 kernel: raid6: int64x1 gen() 5049 MB/s
Feb 9 10:05:47.170176 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 9 10:05:47.170196 kernel: raid6: using algorithm neonx8 gen() 13810 MB/s
Feb 9 10:05:47.170217 kernel: raid6: .... xor() 10830 MB/s, rmw enabled
Feb 9 10:05:47.170233 kernel: raid6: using neon recovery algorithm
Feb 9 10:05:47.180998 kernel: xor: measuring software checksum speed
Feb 9 10:05:47.181021 kernel: 8regs : 17308 MB/sec
Feb 9 10:05:47.181989 kernel: 32regs : 20760 MB/sec
Feb 9 10:05:47.183000 kernel: arm64_neon : 27997 MB/sec
Feb 9 10:05:47.183020 kernel: xor: using function: arm64_neon (27997 MB/sec)
Feb 9 10:05:47.236005 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 10:05:47.246143 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 10:05:47.247743 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:05:47.250381 kernel: audit: type=1130 audit(1707473147.245:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:47.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:47.246000 audit: BPF prog-id=7 op=LOAD
Feb 9 10:05:47.246000 audit: BPF prog-id=8 op=LOAD
Feb 9 10:05:47.262962 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Feb 9 10:05:47.266277 systemd[1]: Started systemd-udevd.service.
Feb 9 10:05:47.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:47.268336 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 10:05:47.280243 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation
Feb 9 10:05:47.306150 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 10:05:47.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:47.307611 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:05:47.340594 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:05:47.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:47.367006 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 9 10:05:47.376184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 10:05:47.376206 kernel: GPT:9289727 != 19775487
Feb 9 10:05:47.376215 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 10:05:47.376223 kernel: GPT:9289727 != 19775487
Feb 9 10:05:47.377285 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 10:05:47.377297 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:05:47.391462 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 10:05:47.393642 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (541)
Feb 9 10:05:47.397115 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 10:05:47.397969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 10:05:47.401826 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 10:05:47.405246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:05:47.406755 systemd[1]: Starting disk-uuid.service...
Feb 9 10:05:47.412236 disk-uuid[561]: Primary Header is updated.
Feb 9 10:05:47.412236 disk-uuid[561]: Secondary Entries is updated.
Feb 9 10:05:47.412236 disk-uuid[561]: Secondary Header is updated.
Feb 9 10:05:47.414998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:05:48.425679 disk-uuid[562]: The operation has completed successfully.
Feb 9 10:05:48.427150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 9 10:05:48.448267 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 10:05:48.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.448354 systemd[1]: Finished disk-uuid.service.
Feb 9 10:05:48.452210 systemd[1]: Starting verity-setup.service...
Feb 9 10:05:48.470005 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 10:05:48.489395 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 10:05:48.491377 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 10:05:48.493082 systemd[1]: Finished verity-setup.service.
Feb 9 10:05:48.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.538674 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 10:05:48.539697 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 10:05:48.539373 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 10:05:48.539961 systemd[1]: Starting ignition-setup.service...
Feb 9 10:05:48.541682 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 10:05:48.548062 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:05:48.548093 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:05:48.548103 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:05:48.554922 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 10:05:48.560618 systemd[1]: Finished ignition-setup.service.
Feb 9 10:05:48.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.561965 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 10:05:48.616573 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 10:05:48.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.617000 audit: BPF prog-id=9 op=LOAD
Feb 9 10:05:48.618649 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:05:48.639854 systemd-networkd[740]: lo: Link UP
Feb 9 10:05:48.639869 systemd-networkd[740]: lo: Gained carrier
Feb 9 10:05:48.640249 systemd-networkd[740]: Enumeration completed
Feb 9 10:05:48.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.640333 systemd[1]: Started systemd-networkd.service.
Feb 9 10:05:48.640419 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:05:48.641231 systemd[1]: Reached target network.target.
Feb 9 10:05:48.641394 systemd-networkd[740]: eth0: Link UP
Feb 9 10:05:48.641397 systemd-networkd[740]: eth0: Gained carrier
Feb 9 10:05:48.642902 systemd[1]: Starting iscsiuio.service...
Feb 9 10:05:48.652901 ignition[646]: Ignition 2.14.0
Feb 9 10:05:48.652911 ignition[646]: Stage: fetch-offline
Feb 9 10:05:48.652949 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:48.652959 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:48.653121 ignition[646]: parsed url from cmdline: ""
Feb 9 10:05:48.655240 systemd[1]: Started iscsiuio.service.
Feb 9 10:05:48.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.653124 ignition[646]: no config URL provided
Feb 9 10:05:48.653129 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 10:05:48.657428 systemd[1]: Starting iscsid.service...
Feb 9 10:05:48.653136 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Feb 9 10:05:48.662678 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:05:48.662678 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 10:05:48.662678 iscsid[747]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 10:05:48.662678 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 10:05:48.662678 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 10:05:48.662678 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 10:05:48.662678 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 10:05:48.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.660714 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:05:48.653154 ignition[646]: op(1): [started] loading QEMU firmware config module
Feb 9 10:05:48.666731 systemd[1]: Started iscsid.service.
Feb 9 10:05:48.653159 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 10:05:48.668086 systemd[1]: Starting dracut-initqueue.service...
Feb 9 10:05:48.658655 ignition[646]: op(1): [finished] loading QEMU firmware config module
Feb 9 10:05:48.658676 ignition[646]: QEMU firmware config was not found. Ignoring...
Feb 9 10:05:48.678529 systemd[1]: Finished dracut-initqueue.service.
Feb 9 10:05:48.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.679520 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 10:05:48.680722 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:05:48.682162 systemd[1]: Reached target remote-fs.target.
Feb 9 10:05:48.684124 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 10:05:48.691303 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 10:05:48.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.696290 ignition[646]: parsing config with SHA512: 4299312447915729333bf9c6edc55e7906fa793d415a727064f1e877d5bf033582160ae6521fcab077bc7f935f3ff4558e726035aaf43d218f2ba45d70e81cb9
Feb 9 10:05:48.717580 unknown[646]: fetched base config from "system"
Feb 9 10:05:48.717590 unknown[646]: fetched user config from "qemu"
Feb 9 10:05:48.717988 ignition[646]: fetch-offline: fetch-offline passed
Feb 9 10:05:48.718046 ignition[646]: Ignition finished successfully
Feb 9 10:05:48.719924 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 10:05:48.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.720673 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 10:05:48.721335 systemd[1]: Starting ignition-kargs.service...
Feb 9 10:05:48.729664 ignition[761]: Ignition 2.14.0
Feb 9 10:05:48.729673 ignition[761]: Stage: kargs
Feb 9 10:05:48.729758 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:48.729768 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:48.730622 ignition[761]: kargs: kargs passed
Feb 9 10:05:48.730663 ignition[761]: Ignition finished successfully
Feb 9 10:05:48.733966 systemd[1]: Finished ignition-kargs.service.
Feb 9 10:05:48.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.735356 systemd[1]: Starting ignition-disks.service...
Feb 9 10:05:48.741731 ignition[767]: Ignition 2.14.0
Feb 9 10:05:48.741740 ignition[767]: Stage: disks
Feb 9 10:05:48.741828 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:48.741838 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:48.744195 systemd[1]: Finished ignition-disks.service.
Feb 9 10:05:48.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.742791 ignition[767]: disks: disks passed
Feb 9 10:05:48.745419 systemd[1]: Reached target initrd-root-device.target.
Feb 9 10:05:48.742836 ignition[767]: Ignition finished successfully
Feb 9 10:05:48.746343 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:05:48.747222 systemd[1]: Reached target local-fs.target.
Feb 9 10:05:48.748234 systemd[1]: Reached target sysinit.target.
Feb 9 10:05:48.749153 systemd[1]: Reached target basic.target.
Feb 9 10:05:48.750897 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 10:05:48.761138 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 10:05:48.764848 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 10:05:48.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.766275 systemd[1]: Mounting sysroot.mount...
Feb 9 10:05:48.776000 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 10:05:48.776206 systemd[1]: Mounted sysroot.mount.
Feb 9 10:05:48.776892 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 10:05:48.779275 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 10:05:48.780105 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 10:05:48.780145 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 10:05:48.780168 systemd[1]: Reached target ignition-diskful.target.
Feb 9 10:05:48.781965 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 10:05:48.784468 systemd[1]: Starting initrd-setup-root.service...
Feb 9 10:05:48.790365 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 10:05:48.794326 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Feb 9 10:05:48.798080 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 10:05:48.801770 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 10:05:48.827556 systemd[1]: Finished initrd-setup-root.service.
Feb 9 10:05:48.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.829077 systemd[1]: Starting ignition-mount.service...
Feb 9 10:05:48.830356 systemd[1]: Starting sysroot-boot.service...
Feb 9 10:05:48.834631 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 10:05:48.842736 ignition[828]: INFO : Ignition 2.14.0
Feb 9 10:05:48.842736 ignition[828]: INFO : Stage: mount
Feb 9 10:05:48.844649 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:48.844649 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:48.844649 ignition[828]: INFO : mount: mount passed
Feb 9 10:05:48.844649 ignition[828]: INFO : Ignition finished successfully
Feb 9 10:05:48.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.846571 systemd[1]: Finished ignition-mount.service.
Feb 9 10:05:48.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:48.847742 systemd[1]: Finished sysroot-boot.service.
Feb 9 10:05:49.499384 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 10:05:49.504992 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
Feb 9 10:05:49.506306 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 10:05:49.506319 kernel: BTRFS info (device vda6): using free space tree
Feb 9 10:05:49.506328 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 10:05:49.509370 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 10:05:49.510760 systemd[1]: Starting ignition-files.service...
Feb 9 10:05:49.524095 ignition[857]: INFO : Ignition 2.14.0
Feb 9 10:05:49.524095 ignition[857]: INFO : Stage: files
Feb 9 10:05:49.525445 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:49.525445 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:49.525445 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 10:05:49.528097 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 10:05:49.528097 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 10:05:49.532870 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 10:05:49.533912 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 10:05:49.533912 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 10:05:49.533580 unknown[857]: wrote ssh authorized keys file for user: core
Feb 9 10:05:49.537079 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:05:49.537079 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 10:05:49.850619 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 10:05:50.085613 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 10:05:50.088014 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 10:05:50.088014 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:05:50.088014 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 10:05:50.320035 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 10:05:50.471165 systemd-networkd[740]: eth0: Gained IPv6LL
Feb 9 10:05:50.527076 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 10:05:50.529376 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 10:05:50.529376 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:05:50.529376 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1
Feb 9 10:05:50.573816 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 10:05:50.979461 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970
Feb 9 10:05:50.979461 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 10:05:50.982855 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:05:50.982855 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1
Feb 9 10:05:51.002992 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 10:05:51.558394 ignition[857]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(a): op(b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(a): op(b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-critools.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(c): [finished] processing unit "prepare-critools.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 9 10:05:51.560741 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:05:51.585416 ignition[857]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:05:51.597697 ignition[857]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 10:05:51.599065 ignition[857]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 10:05:51.599065 ignition[857]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:05:51.599065 ignition[857]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 10:05:51.599065 ignition[857]: INFO : files: files passed
Feb 9 10:05:51.599065 ignition[857]: INFO : Ignition finished successfully
Feb 9 10:05:51.607590 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 9 10:05:51.607611 kernel: audit: type=1130 audit(1707473151.600:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.599089 systemd[1]: Finished ignition-files.service.
Feb 9 10:05:51.601727 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 10:05:51.612934 kernel: audit: type=1130 audit(1707473151.608:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.612970 kernel: audit: type=1131 audit(1707473151.608:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.613128 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 10:05:51.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.604841 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 10:05:51.618156 kernel: audit: type=1130 audit(1707473151.612:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.618205 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 10:05:51.605549 systemd[1]: Starting ignition-quench.service...
Feb 9 10:05:51.608347 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 10:05:51.608432 systemd[1]: Finished ignition-quench.service.
Feb 9 10:05:51.611866 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 10:05:51.613719 systemd[1]: Reached target ignition-complete.target.
Feb 9 10:05:51.617587 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 10:05:51.629256 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 10:05:51.629357 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 10:05:51.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.630806 systemd[1]: Reached target initrd-fs.target.
Feb 9 10:05:51.635479 kernel: audit: type=1130 audit(1707473151.629:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.635498 kernel: audit: type=1131 audit(1707473151.629:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.635083 systemd[1]: Reached target initrd.target.
Feb 9 10:05:51.636104 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 10:05:51.636829 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 10:05:51.646835 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 10:05:51.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.648309 systemd[1]: Starting initrd-cleanup.service...
Feb 9 10:05:51.650796 kernel: audit: type=1130 audit(1707473151.646:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.655996 systemd[1]: Stopped target nss-lookup.target.
Feb 9 10:05:51.656773 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 10:05:51.657969 systemd[1]: Stopped target timers.target.
Feb 9 10:05:51.659041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 10:05:51.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.659149 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 10:05:51.663224 kernel: audit: type=1131 audit(1707473151.659:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.660188 systemd[1]: Stopped target initrd.target.
Feb 9 10:05:51.662874 systemd[1]: Stopped target basic.target.
Feb 9 10:05:51.663869 systemd[1]: Stopped target ignition-complete.target.
Feb 9 10:05:51.664974 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 10:05:51.666087 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 10:05:51.667261 systemd[1]: Stopped target remote-fs.target.
Feb 9 10:05:51.668349 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 10:05:51.669516 systemd[1]: Stopped target sysinit.target.
Feb 9 10:05:51.670469 systemd[1]: Stopped target local-fs.target.
Feb 9 10:05:51.671460 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 10:05:51.672470 systemd[1]: Stopped target swap.target.
Feb 9 10:05:51.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.673406 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 10:05:51.677676 kernel: audit: type=1131 audit(1707473151.673:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.673525 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 10:05:51.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.674567 systemd[1]: Stopped target cryptsetup.target.
Feb 9 10:05:51.681375 kernel: audit: type=1131 audit(1707473151.677:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.677173 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 10:05:51.677279 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 10:05:51.678407 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 10:05:51.678508 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 10:05:51.681060 systemd[1]: Stopped target paths.target.
Feb 9 10:05:51.682025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 10:05:51.687009 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 10:05:51.687867 systemd[1]: Stopped target slices.target.
Feb 9 10:05:51.689039 systemd[1]: Stopped target sockets.target.
Feb 9 10:05:51.690082 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 10:05:51.690155 systemd[1]: Closed iscsid.socket.
Feb 9 10:05:51.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.691108 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 10:05:51.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.691206 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 10:05:51.692392 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 10:05:51.692487 systemd[1]: Stopped ignition-files.service.
Feb 9 10:05:51.702619 ignition[897]: INFO : Ignition 2.14.0
Feb 9 10:05:51.702619 ignition[897]: INFO : Stage: umount
Feb 9 10:05:51.702619 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 10:05:51.702619 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 10:05:51.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.694218 systemd[1]: Stopping ignition-mount.service...
Feb 9 10:05:51.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.709686 ignition[897]: INFO : umount: umount passed
Feb 9 10:05:51.709686 ignition[897]: INFO : Ignition finished successfully
Feb 9 10:05:51.695321 systemd[1]: Stopping iscsiuio.service...
Feb 9 10:05:51.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.697925 systemd[1]: Stopping sysroot-boot.service...
Feb 9 10:05:51.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.698473 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 10:05:51.698591 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 10:05:51.699315 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 10:05:51.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.699410 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 10:05:51.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.701425 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 10:05:51.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.701513 systemd[1]: Stopped iscsiuio.service.
Feb 9 10:05:51.702486 systemd[1]: Stopped target network.target.
Feb 9 10:05:51.703257 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 10:05:51.703328 systemd[1]: Closed iscsiuio.socket.
Feb 9 10:05:51.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.704362 systemd[1]: Stopping systemd-networkd.service...
Feb 9 10:05:51.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.705487 systemd[1]: Stopping systemd-resolved.service...
Feb 9 10:05:51.706734 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 10:05:51.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.706812 systemd[1]: Stopped ignition-mount.service.
Feb 9 10:05:51.710126 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 10:05:51.710604 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 10:05:51.710688 systemd[1]: Finished initrd-cleanup.service.
Feb 9 10:05:51.711052 systemd-networkd[740]: eth0: DHCPv6 lease lost
Feb 9 10:05:51.732000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 10:05:51.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.712380 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 10:05:51.712471 systemd[1]: Stopped systemd-networkd.service.
Feb 9 10:05:51.714802 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 10:05:51.714831 systemd[1]: Closed systemd-networkd.socket.
Feb 9 10:05:51.715718 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 10:05:51.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.739000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 10:05:51.715762 systemd[1]: Stopped ignition-disks.service.
Feb 9 10:05:51.716797 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 10:05:51.716834 systemd[1]: Stopped ignition-kargs.service.
Feb 9 10:05:51.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.717887 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 10:05:51.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.717921 systemd[1]: Stopped ignition-setup.service.
Feb 9 10:05:51.719672 systemd[1]: Stopping network-cleanup.service...
Feb 9 10:05:51.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.720554 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 10:05:51.720614 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 10:05:51.723682 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 10:05:51.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.723728 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 10:05:51.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.725231 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 10:05:51.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.725268 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 10:05:51.727296 systemd[1]: Stopping systemd-udevd.service...
Feb 9 10:05:51.732630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 10:05:51.733157 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 10:05:51.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.733249 systemd[1]: Stopped systemd-resolved.service.
Feb 9 10:05:51.737030 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 10:05:51.737156 systemd[1]: Stopped systemd-udevd.service.
Feb 9 10:05:51.738524 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 10:05:51.738605 systemd[1]: Stopped network-cleanup.service.
Feb 9 10:05:51.740231 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 10:05:51.740268 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 10:05:51.741635 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 10:05:51.741664 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 10:05:51.742698 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 10:05:51.742741 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 10:05:51.744040 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 10:05:51.744081 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 10:05:51.745038 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 10:05:51.745975 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 10:05:51.748388 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 10:05:51.750383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 10:05:51.750451 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 10:05:51.752345 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 10:05:51.752391 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 10:05:51.754053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 10:05:51.754099 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 10:05:51.756092 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 10:05:51.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.757150 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 10:05:51.757238 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 10:05:51.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:51.776349 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 10:05:51.776444 systemd[1]: Stopped sysroot-boot.service.
Feb 9 10:05:51.777666 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 10:05:51.779114 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 10:05:51.779160 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 10:05:51.781140 systemd[1]: Starting initrd-switch-root.service...
Feb 9 10:05:51.787376 systemd[1]: Switching root.
Feb 9 10:05:51.808274 iscsid[747]: iscsid shutting down.
Feb 9 10:05:51.808930 systemd-journald[291]: Journal stopped
Feb 9 10:05:53.859329 systemd-journald[291]: Received SIGTERM from PID 1 (systemd).
Feb 9 10:05:53.859389 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 10:05:53.859402 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 10:05:53.859417 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 10:05:53.859429 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 10:05:53.859438 kernel: SELinux: policy capability open_perms=1
Feb 9 10:05:53.859447 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 10:05:53.859457 kernel: SELinux: policy capability always_check_network=0
Feb 9 10:05:53.859466 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 10:05:53.859477 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 10:05:53.859488 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 10:05:53.859497 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 10:05:53.859507 systemd[1]: Successfully loaded SELinux policy in 31.275ms.
Feb 9 10:05:53.859523 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.972ms.
Feb 9 10:05:53.859534 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 10:05:53.859545 systemd[1]: Detected virtualization kvm.
Feb 9 10:05:53.859556 systemd[1]: Detected architecture arm64.
Feb 9 10:05:53.859566 systemd[1]: Detected first boot.
Feb 9 10:05:53.859576 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 10:05:53.859586 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 10:05:53.859596 systemd[1]: Populated /etc with preset unit settings.
Feb 9 10:05:53.859607 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 10:05:53.859618 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 10:05:53.859630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 10:05:53.859641 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 10:05:53.859652 systemd[1]: Stopped iscsid.service.
Feb 9 10:05:53.859663 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 10:05:53.859673 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 10:05:53.859692 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 10:05:53.859706 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 10:05:53.859717 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 10:05:53.859727 systemd[1]: Created slice system-getty.slice.
Feb 9 10:05:53.859739 systemd[1]: Created slice system-modprobe.slice.
Feb 9 10:05:53.859749 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 10:05:53.859760 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 10:05:53.859770 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 10:05:53.859780 systemd[1]: Created slice user.slice.
Feb 9 10:05:53.859790 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 10:05:53.859801 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 10:05:53.859810 systemd[1]: Set up automount boot.automount.
Feb 9 10:05:53.859821 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 10:05:53.859834 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 10:05:53.859844 systemd[1]: Stopped target initrd-fs.target.
Feb 9 10:05:53.859854 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 10:05:53.859865 systemd[1]: Reached target integritysetup.target.
Feb 9 10:05:53.859875 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 10:05:53.859885 systemd[1]: Reached target remote-fs.target.
Feb 9 10:05:53.859895 systemd[1]: Reached target slices.target.
Feb 9 10:05:53.859905 systemd[1]: Reached target swap.target.
Feb 9 10:05:53.859916 systemd[1]: Reached target torcx.target.
Feb 9 10:05:53.859926 systemd[1]: Reached target veritysetup.target.
Feb 9 10:05:53.859936 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 10:05:53.859946 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 10:05:53.859962 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 10:05:53.859973 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 10:05:53.859993 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 10:05:53.860005 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 10:05:53.860015 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 10:05:53.860026 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 10:05:53.860038 systemd[1]: Mounting media.mount...
Feb 9 10:05:53.860048 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 10:05:53.860058 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 10:05:53.860068 systemd[1]: Mounting tmp.mount...
Feb 9 10:05:53.860078 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 10:05:53.860089 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 10:05:53.860100 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 10:05:53.860110 systemd[1]: Starting modprobe@configfs.service...
Feb 9 10:05:53.860120 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 10:05:53.860131 systemd[1]: Starting modprobe@drm.service...
Feb 9 10:05:53.860141 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 10:05:53.860151 systemd[1]: Starting modprobe@fuse.service...
Feb 9 10:05:53.860161 systemd[1]: Starting modprobe@loop.service...
Feb 9 10:05:53.860172 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 10:05:53.860182 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 10:05:53.860193 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 10:05:53.860203 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 10:05:53.860215 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 10:05:53.860226 systemd[1]: Stopped systemd-journald.service.
Feb 9 10:05:53.860236 kernel: fuse: init (API version 7.34)
Feb 9 10:05:53.860245 systemd[1]: Starting systemd-journald.service...
Feb 9 10:05:53.860255 systemd[1]: Starting systemd-modules-load.service...
Feb 9 10:05:53.860265 kernel: loop: module loaded
Feb 9 10:05:53.860277 systemd[1]: Starting systemd-network-generator.service...
Feb 9 10:05:53.860289 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 10:05:53.860299 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 10:05:53.860309 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 10:05:53.860319 systemd[1]: Stopped verity-setup.service.
Feb 9 10:05:53.860329 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 10:05:53.860344 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 10:05:53.860354 systemd[1]: Mounted media.mount.
Feb 9 10:05:53.860364 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 10:05:53.860374 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 10:05:53.860385 systemd[1]: Mounted tmp.mount.
Feb 9 10:05:53.860396 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 10:05:53.860406 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 10:05:53.860416 systemd[1]: Finished modprobe@configfs.service.
Feb 9 10:05:53.860427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 10:05:53.860438 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 10:05:53.860450 systemd-journald[996]: Journal started
Feb 9 10:05:53.860490 systemd-journald[996]: Runtime Journal (/run/log/journal/d586a071203749c3863ff0ccbbd3868d) is 6.0M, max 48.7M, 42.6M free.
Feb 9 10:05:51.867000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 10:05:52.050000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:05:52.050000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 10:05:52.050000 audit: BPF prog-id=10 op=LOAD
Feb 9 10:05:52.050000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 10:05:52.050000 audit: BPF prog-id=11 op=LOAD
Feb 9 10:05:52.050000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 10:05:52.086000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 10:05:52.086000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cd8b2 a1=4000150de0 a2=40001570c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:05:52.086000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:05:52.087000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 10:05:52.087000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cd989 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:05:52.087000 audit: CWD cwd="/"
Feb 9 10:05:52.087000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:05:52.087000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 10:05:52.087000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 10:05:53.744000 audit: BPF prog-id=12 op=LOAD
Feb 9 10:05:53.744000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 10:05:53.744000 audit: BPF prog-id=13 op=LOAD
Feb 9 10:05:53.744000 audit: BPF prog-id=14 op=LOAD
Feb 9 10:05:53.744000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 10:05:53.744000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 10:05:53.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.757000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 10:05:53.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.829000 audit: BPF prog-id=15 op=LOAD
Feb 9 10:05:53.829000 audit: BPF prog-id=16 op=LOAD
Feb 9 10:05:53.829000 audit: BPF prog-id=17 op=LOAD
Feb 9 10:05:53.829000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 10:05:53.829000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 10:05:53.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.858000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 10:05:53.858000 audit[996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffd259f640 a2=4000 a3=1 items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:05:53.858000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 10:05:53.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.743555 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 10:05:52.086056 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 10:05:53.743566 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 10:05:52.086542 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:05:53.746310 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 10:05:52.086561 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:05:52.086591 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 10:05:53.862250 systemd[1]: Started systemd-journald.service.
Feb 9 10:05:52.086602 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 10:05:53.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:52.086628 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 10:05:52.086639 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 10:05:52.086823 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 10:05:52.086854 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 10:05:53.862652 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 10:05:52.086867 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 10:05:52.087282 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 10:05:52.087314 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 10:05:52.087331 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 10:05:52.087344 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 10:05:52.087361 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 10:05:52.087376 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 10:05:53.863160 systemd[1]: Finished modprobe@drm.service.
Feb 9 10:05:53.500339 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:05:53.500595 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:05:53.500693 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:05:53.500844 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 10:05:53.500891 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 10:05:53.500950 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2024-02-09T10:05:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 10:05:53.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.864330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 10:05:53.864470 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 10:05:53.865418 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 10:05:53.865785 systemd[1]: Finished modprobe@fuse.service.
Feb 9 10:05:53.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.866745 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 10:05:53.866848 systemd[1]: Finished modprobe@loop.service.
Feb 9 10:05:53.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.869121 systemd[1]: Finished systemd-modules-load.service.
Feb 9 10:05:53.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.870261 systemd[1]: Finished systemd-network-generator.service.
Feb 9 10:05:53.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.871316 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 10:05:53.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.872554 systemd[1]: Reached target network-pre.target.
Feb 9 10:05:53.874267 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 10:05:53.875795 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 10:05:53.876912 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 10:05:53.879941 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 10:05:53.881581 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 10:05:53.882319 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 10:05:53.883243 systemd[1]: Starting systemd-random-seed.service...
Feb 9 10:05:53.883965 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 10:05:53.885221 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:05:53.886927 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 10:05:53.887744 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 10:05:53.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.890804 systemd[1]: Finished systemd-random-seed.service.
Feb 9 10:05:53.891644 systemd[1]: Reached target first-boot-complete.target.
Feb 9 10:05:53.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.894521 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 10:05:53.896353 systemd[1]: Starting systemd-sysusers.service...
Feb 9 10:05:53.900993 systemd-journald[996]: Time spent on flushing to /var/log/journal/d586a071203749c3863ff0ccbbd3868d is 13.067ms for 1007 entries.
Feb 9 10:05:53.900993 systemd-journald[996]: System Journal (/var/log/journal/d586a071203749c3863ff0ccbbd3868d) is 8.0M, max 195.6M, 187.6M free.
Feb 9 10:05:53.925557 systemd-journald[996]: Received client request to flush runtime journal.
Feb 9 10:05:53.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:53.903081 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:05:53.909587 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 10:05:53.910554 systemd[1]: Finished systemd-sysusers.service.
Feb 9 10:05:53.928749 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 10:05:53.912331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 10:05:53.914168 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 10:05:53.926694 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 10:05:53.934162 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 10:05:53.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.332235 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 10:05:54.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.332000 audit: BPF prog-id=18 op=LOAD
Feb 9 10:05:54.332000 audit: BPF prog-id=19 op=LOAD
Feb 9 10:05:54.332000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 10:05:54.332000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 10:05:54.334378 systemd[1]: Starting systemd-udevd.service...
Feb 9 10:05:54.352711 systemd-udevd[1036]: Using default interface naming scheme 'v252'.
Feb 9 10:05:54.367555 systemd[1]: Started systemd-udevd.service.
Feb 9 10:05:54.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.368000 audit: BPF prog-id=20 op=LOAD
Feb 9 10:05:54.369687 systemd[1]: Starting systemd-networkd.service...
Feb 9 10:05:54.377000 audit: BPF prog-id=21 op=LOAD
Feb 9 10:05:54.377000 audit: BPF prog-id=22 op=LOAD
Feb 9 10:05:54.377000 audit: BPF prog-id=23 op=LOAD
Feb 9 10:05:54.379491 systemd[1]: Starting systemd-userdbd.service...
Feb 9 10:05:54.395456 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 10:05:54.417466 systemd[1]: Started systemd-userdbd.service.
Feb 9 10:05:54.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.428449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 10:05:54.471457 systemd-networkd[1044]: lo: Link UP
Feb 9 10:05:54.471703 systemd-networkd[1044]: lo: Gained carrier
Feb 9 10:05:54.472139 systemd-networkd[1044]: Enumeration completed
Feb 9 10:05:54.472320 systemd[1]: Started systemd-networkd.service.
Feb 9 10:05:54.472417 systemd-networkd[1044]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 10:05:54.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.473938 systemd-networkd[1044]: eth0: Link UP
Feb 9 10:05:54.474147 systemd-networkd[1044]: eth0: Gained carrier
Feb 9 10:05:54.474349 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 10:05:54.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.476089 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 10:05:54.493100 systemd-networkd[1044]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 10:05:54.494682 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:05:54.515729 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 10:05:54.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.516502 systemd[1]: Reached target cryptsetup.target.
Feb 9 10:05:54.518083 systemd[1]: Starting lvm2-activation.service...
Feb 9 10:05:54.521569 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 10:05:54.548804 systemd[1]: Finished lvm2-activation.service.
Feb 9 10:05:54.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.549550 systemd[1]: Reached target local-fs-pre.target.
Feb 9 10:05:54.550164 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 10:05:54.550190 systemd[1]: Reached target local-fs.target.
Feb 9 10:05:54.550718 systemd[1]: Reached target machines.target.
Feb 9 10:05:54.552305 systemd[1]: Starting ldconfig.service...
Feb 9 10:05:54.553171 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 10:05:54.553244 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:54.554383 systemd[1]: Starting systemd-boot-update.service...
Feb 9 10:05:54.556266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 10:05:54.558187 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 10:05:54.559049 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:05:54.559101 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 10:05:54.560096 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 10:05:54.561096 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl)
Feb 9 10:05:54.562362 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 10:05:54.573777 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 10:05:54.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.573785 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 10:05:54.581322 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 10:05:54.583080 systemd-tmpfiles[1075]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 10:05:54.631012 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 10:05:54.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.652442 systemd-fsck[1082]: fsck.fat 4.2 (2021-01-31)
Feb 9 10:05:54.652442 systemd-fsck[1082]: /dev/vda1: 236 files, 113719/258078 clusters
Feb 9 10:05:54.654033 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 10:05:54.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.714565 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 10:05:54.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.717961 systemd[1]: Finished ldconfig.service.
Feb 9 10:05:54.847782 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 10:05:54.849181 systemd[1]: Mounting boot.mount...
Feb 9 10:05:54.855468 systemd[1]: Mounted boot.mount.
Feb 9 10:05:54.863912 systemd[1]: Finished systemd-boot-update.service.
Feb 9 10:05:54.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.912078 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 10:05:54.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.913943 systemd[1]: Starting audit-rules.service...
Feb 9 10:05:54.915472 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 10:05:54.917169 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 10:05:54.917000 audit: BPF prog-id=24 op=LOAD
Feb 9 10:05:54.919689 systemd[1]: Starting systemd-resolved.service...
Feb 9 10:05:54.919000 audit: BPF prog-id=25 op=LOAD
Feb 9 10:05:54.921755 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 10:05:54.925130 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 10:05:54.926263 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 10:05:54.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.927395 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 10:05:54.930000 audit[1097]: SYSTEM_BOOT pid=1097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.934213 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 10:05:54.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.941778 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 10:05:54.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.943688 systemd[1]: Starting systemd-update-done.service...
Feb 9 10:05:54.951115 systemd[1]: Finished systemd-update-done.service.
Feb 9 10:05:54.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 10:05:54.957000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 10:05:54.957000 audit[1106]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc6be1720 a2=420 a3=0 items=0 ppid=1085 pid=1106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 10:05:54.957000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 10:05:54.958389 augenrules[1106]: No rules
Feb 9 10:05:54.958994 systemd[1]: Finished audit-rules.service.
Feb 9 10:05:54.969248 systemd-resolved[1089]: Positive Trust Anchors:
Feb 9 10:05:54.969258 systemd-resolved[1089]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 10:05:54.969285 systemd-resolved[1089]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 10:05:54.973490 systemd[1]: Started systemd-timesyncd.service.
Feb 9 10:05:54.974247 systemd-timesyncd[1095]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 9 10:05:54.974393 systemd[1]: Reached target time-set.target.
Feb 9 10:05:54.974574 systemd-timesyncd[1095]: Initial clock synchronization to Fri 2024-02-09 10:05:54.613465 UTC.
Feb 9 10:05:54.984083 systemd-resolved[1089]: Defaulting to hostname 'linux'.
Feb 9 10:05:54.985477 systemd[1]: Started systemd-resolved.service.
Feb 9 10:05:54.986147 systemd[1]: Reached target network.target.
Feb 9 10:05:54.986685 systemd[1]: Reached target nss-lookup.target.
Feb 9 10:05:54.987274 systemd[1]: Reached target sysinit.target.
Feb 9 10:05:54.987884 systemd[1]: Started motdgen.path.
Feb 9 10:05:54.988465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 10:05:54.989409 systemd[1]: Started logrotate.timer.
Feb 9 10:05:54.990187 systemd[1]: Started mdadm.timer.
Feb 9 10:05:54.990812 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 10:05:54.991513 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 10:05:54.991545 systemd[1]: Reached target paths.target.
Feb 9 10:05:54.992185 systemd[1]: Reached target timers.target.
Feb 9 10:05:54.993380 systemd[1]: Listening on dbus.socket.
Feb 9 10:05:54.995015 systemd[1]: Starting docker.socket...
Feb 9 10:05:54.997924 systemd[1]: Listening on sshd.socket.
Feb 9 10:05:54.998608 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:54.999053 systemd[1]: Listening on docker.socket.
Feb 9 10:05:54.999781 systemd[1]: Reached target sockets.target.
Feb 9 10:05:55.000433 systemd[1]: Reached target basic.target.
Feb 9 10:05:55.001123 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:05:55.001158 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 10:05:55.002112 systemd[1]: Starting containerd.service...
Feb 9 10:05:55.003681 systemd[1]: Starting dbus.service...
Feb 9 10:05:55.005270 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 10:05:55.007073 systemd[1]: Starting extend-filesystems.service...
Feb 9 10:05:55.007797 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 10:05:55.009011 systemd[1]: Starting motdgen.service...
Feb 9 10:05:55.013021 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 10:05:55.016372 jq[1116]: false
Feb 9 10:05:55.014545 systemd[1]: Starting prepare-critools.service...
Feb 9 10:05:55.016209 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 10:05:55.017822 systemd[1]: Starting sshd-keygen.service...
Feb 9 10:05:55.020464 systemd[1]: Starting systemd-logind.service...
Feb 9 10:05:55.021159 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 10:05:55.021227 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 10:05:55.021626 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 10:05:55.022297 systemd[1]: Starting update-engine.service...
Feb 9 10:05:55.024399 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 10:05:55.026686 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 10:05:55.026861 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 10:05:55.027196 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 10:05:55.027413 jq[1135]: true
Feb 9 10:05:55.027326 systemd[1]: Finished motdgen.service.
Feb 9 10:05:55.030139 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 10:05:55.030292 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 10:05:55.031522 extend-filesystems[1117]: Found vda
Feb 9 10:05:55.031522 extend-filesystems[1117]: Found vda1
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda2
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda3
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found usr
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda4
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda6
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda7
Feb 9 10:05:55.036484 extend-filesystems[1117]: Found vda9
Feb 9 10:05:55.036484 extend-filesystems[1117]: Checking size of /dev/vda9
Feb 9 10:05:55.049617 systemd[1]: Started dbus.service.
Feb 9 10:05:55.055838 tar[1137]: ./
Feb 9 10:05:55.055838 tar[1137]: ./loopback
Feb 9 10:05:55.049451 dbus-daemon[1115]: [system] SELinux support is enabled
Feb 9 10:05:55.056230 tar[1138]: crictl
Feb 9 10:05:55.056358 jq[1140]: true
Feb 9 10:05:55.051888 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 10:05:55.051913 systemd[1]: Reached target system-config.target.
Feb 9 10:05:55.052563 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 10:05:55.052579 systemd[1]: Reached target user-config.target.
Feb 9 10:05:55.076924 systemd-logind[1131]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 9 10:05:55.078336 systemd-logind[1131]: New seat seat0.
Feb 9 10:05:55.080515 systemd[1]: Started systemd-logind.service.
Feb 9 10:05:55.089580 extend-filesystems[1117]: Resized partition /dev/vda9
Feb 9 10:05:55.091992 extend-filesystems[1167]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 10:05:55.111368 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 9 10:05:55.118245 tar[1137]: ./bandwidth
Feb 9 10:05:55.172598 update_engine[1132]: I0209 10:05:55.171608  1132 main.cc:92] Flatcar Update Engine starting
Feb 9 10:05:55.173233 env[1141]: time="2024-02-09T10:05:55.173126814Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 10:05:55.173660 tar[1137]: ./ptp
Feb 9 10:05:55.176760 update_engine[1132]: I0209 10:05:55.176738  1132 update_check_scheduler.cc:74] Next update check in 3m44s
Feb 9 10:05:55.176774 systemd[1]: Started update-engine.service.
Feb 9 10:05:55.177993 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 9 10:05:55.179684 systemd[1]: Started locksmithd.service.
Feb 9 10:05:55.194048 extend-filesystems[1167]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 10:05:55.194048 extend-filesystems[1167]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 9 10:05:55.194048 extend-filesystems[1167]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 9 10:05:55.197604 extend-filesystems[1117]: Resized filesystem in /dev/vda9
Feb 9 10:05:55.199086 bash[1169]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 10:05:55.194766 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 10:05:55.194927 systemd[1]: Finished extend-filesystems.service.
Feb 9 10:05:55.197187 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 10:05:55.208535 env[1141]: time="2024-02-09T10:05:55.208497734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 10:05:55.208663 env[1141]: time="2024-02-09T10:05:55.208640780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.210333 env[1141]: time="2024-02-09T10:05:55.210067800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:55.210432 env[1141]: time="2024-02-09T10:05:55.210411759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.210805 env[1141]: time="2024-02-09T10:05:55.210780546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:55.210945 env[1141]: time="2024-02-09T10:05:55.210920536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.211116 env[1141]: time="2024-02-09T10:05:55.211020993Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 10:05:55.211182 env[1141]: time="2024-02-09T10:05:55.211168470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.211419 env[1141]: time="2024-02-09T10:05:55.211347000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.211934 env[1141]: time="2024-02-09T10:05:55.211862881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 10:05:55.212284 env[1141]: time="2024-02-09T10:05:55.212209476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 10:05:55.212363 env[1141]: time="2024-02-09T10:05:55.212347365Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 10:05:55.212544 env[1141]: time="2024-02-09T10:05:55.212524597Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 10:05:55.212668 env[1141]: time="2024-02-09T10:05:55.212652670Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216198333Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216228814Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216241151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216276750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216292525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216305665Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216317697Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216683466Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216700616Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216712878Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216724948Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216737514Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216876893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 10:05:55.217023 env[1141]: time="2024-02-09T10:05:55.216956915Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 10:05:55.217328 env[1141]: time="2024-02-09T10:05:55.217199233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 10:05:55.217328 env[1141]: time="2024-02-09T10:05:55.217223946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217328 env[1141]: time="2024-02-09T10:05:55.217237735Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 10:05:55.217382 env[1141]: time="2024-02-09T10:05:55.217329789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217382 env[1141]: time="2024-02-09T10:05:55.217342470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217382 env[1141]: time="2024-02-09T10:05:55.217354005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217382 env[1141]: time="2024-02-09T10:05:55.217364815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217382 env[1141]: time="2024-02-09T10:05:55.217375510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217466 env[1141]: time="2024-02-09T10:05:55.217387045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217466 env[1141]: time="2024-02-09T10:05:55.217398046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217466 env[1141]: time="2024-02-09T10:05:55.217408053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217466 env[1141]: time="2024-02-09T10:05:55.217419550Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 10:05:55.217544 env[1141]: time="2024-02-09T10:05:55.217531160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217564 env[1141]: time="2024-02-09T10:05:55.217550641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 10:05:55.217583 env[1141]: time="2024-02-09T10:05:55.217562482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"...
type=io.containerd.grpc.v1 Feb 9 10:05:55.217583 env[1141]: time="2024-02-09T10:05:55.217573711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 10:05:55.217619 env[1141]: time="2024-02-09T10:05:55.217587462Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 10:05:55.217619 env[1141]: time="2024-02-09T10:05:55.217597584Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 10:05:55.217619 env[1141]: time="2024-02-09T10:05:55.217613168Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 10:05:55.217674 env[1141]: time="2024-02-09T10:05:55.217643496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 10:05:55.217889 env[1141]: time="2024-02-09T10:05:55.217831614Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 10:05:55.217889 env[1141]: time="2024-02-09T10:05:55.217886808Z" level=info msg="Connect containerd service" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.217916983Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218518425Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218757955Z" level=info msg="Start subscribing containerd event" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218794547Z" level=info msg="Start recovering state" Feb 9 10:05:55.220377 env[1141]: 
time="2024-02-09T10:05:55.218845806Z" level=info msg="Start event monitor" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218861429Z" level=info msg="Start snapshots syncer" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218869641Z" level=info msg="Start cni network conf syncer for default" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.218876937Z" level=info msg="Start streaming server" Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.219277465Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 10:05:55.220377 env[1141]: time="2024-02-09T10:05:55.219331398Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 10:05:55.219502 systemd[1]: Started containerd.service. Feb 9 10:05:55.224090 env[1141]: time="2024-02-09T10:05:55.219423872Z" level=info msg="containerd successfully booted in 0.052790s" Feb 9 10:05:55.229777 tar[1137]: ./vlan Feb 9 10:05:55.262069 locksmithd[1173]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 10:05:55.262288 tar[1137]: ./host-device Feb 9 10:05:55.293668 tar[1137]: ./tuning Feb 9 10:05:55.322033 tar[1137]: ./vrf Feb 9 10:05:55.351157 tar[1137]: ./sbr Feb 9 10:05:55.379514 tar[1137]: ./tap Feb 9 10:05:55.412708 tar[1137]: ./dhcp Feb 9 10:05:55.491324 systemd[1]: Finished prepare-critools.service. Feb 9 10:05:55.494303 tar[1137]: ./static Feb 9 10:05:55.514594 tar[1137]: ./firewall Feb 9 10:05:55.545104 tar[1137]: ./macvlan Feb 9 10:05:55.572886 tar[1137]: ./dummy Feb 9 10:05:55.600290 tar[1137]: ./bridge Feb 9 10:05:55.630145 tar[1137]: ./ipvlan Feb 9 10:05:55.657489 tar[1137]: ./portmap Feb 9 10:05:55.683498 tar[1137]: ./host-local Feb 9 10:05:55.720683 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 10:05:56.171336 sshd_keygen[1143]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 10:05:56.187526 systemd[1]: Finished sshd-keygen.service. 
Feb 9 10:05:56.189556 systemd[1]: Starting issuegen.service... Feb 9 10:05:56.193618 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 10:05:56.193755 systemd[1]: Finished issuegen.service. Feb 9 10:05:56.195621 systemd[1]: Starting systemd-user-sessions.service... Feb 9 10:05:56.201005 systemd[1]: Finished systemd-user-sessions.service. Feb 9 10:05:56.202971 systemd[1]: Started getty@tty1.service. Feb 9 10:05:56.204784 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 10:05:56.205617 systemd[1]: Reached target getty.target. Feb 9 10:05:56.206319 systemd[1]: Reached target multi-user.target. Feb 9 10:05:56.208144 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 10:05:56.213860 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 10:05:56.214025 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 10:05:56.214921 systemd[1]: Startup finished in 557ms (kernel) + 5.249s (initrd) + 4.384s (userspace) = 10.190s. Feb 9 10:05:56.423141 systemd-networkd[1044]: eth0: Gained IPv6LL Feb 9 10:05:58.720375 systemd[1]: Created slice system-sshd.slice. Feb 9 10:05:58.721440 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:53348.service. Feb 9 10:05:58.767736 sshd[1200]: Accepted publickey for core from 10.0.0.1 port 53348 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:05:58.769644 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:58.781918 systemd[1]: Created slice user-500.slice. Feb 9 10:05:58.783037 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 10:05:58.785056 systemd-logind[1131]: New session 1 of user core. Feb 9 10:05:58.790526 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 10:05:58.791847 systemd[1]: Starting user@500.service... 
Feb 9 10:05:58.794571 (systemd)[1203]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:58.850570 systemd[1203]: Queued start job for default target default.target. Feb 9 10:05:58.851027 systemd[1203]: Reached target paths.target. Feb 9 10:05:58.851046 systemd[1203]: Reached target sockets.target. Feb 9 10:05:58.851057 systemd[1203]: Reached target timers.target. Feb 9 10:05:58.851067 systemd[1203]: Reached target basic.target. Feb 9 10:05:58.851118 systemd[1203]: Reached target default.target. Feb 9 10:05:58.851141 systemd[1203]: Startup finished in 50ms. Feb 9 10:05:58.851347 systemd[1]: Started user@500.service. Feb 9 10:05:58.852363 systemd[1]: Started session-1.scope. Feb 9 10:05:58.901306 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:53350.service. Feb 9 10:05:58.937321 sshd[1212]: Accepted publickey for core from 10.0.0.1 port 53350 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:05:58.938550 sshd[1212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:58.941774 systemd-logind[1131]: New session 2 of user core. Feb 9 10:05:58.942456 systemd[1]: Started session-2.scope. Feb 9 10:05:58.999033 sshd[1212]: pam_unix(sshd:session): session closed for user core Feb 9 10:05:59.001587 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:53350.service: Deactivated successfully. Feb 9 10:05:59.002199 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 10:05:59.002674 systemd-logind[1131]: Session 2 logged out. Waiting for processes to exit. Feb 9 10:05:59.004022 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:53352.service. Feb 9 10:05:59.004595 systemd-logind[1131]: Removed session 2. 
Feb 9 10:05:59.040318 sshd[1218]: Accepted publickey for core from 10.0.0.1 port 53352 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:05:59.041726 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:59.044804 systemd-logind[1131]: New session 3 of user core. Feb 9 10:05:59.045601 systemd[1]: Started session-3.scope. Feb 9 10:05:59.092646 sshd[1218]: pam_unix(sshd:session): session closed for user core Feb 9 10:05:59.096569 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:53352.service: Deactivated successfully. Feb 9 10:05:59.097143 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 10:05:59.097611 systemd-logind[1131]: Session 3 logged out. Waiting for processes to exit. Feb 9 10:05:59.098652 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:53354.service. Feb 9 10:05:59.099268 systemd-logind[1131]: Removed session 3. Feb 9 10:05:59.135139 sshd[1224]: Accepted publickey for core from 10.0.0.1 port 53354 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:05:59.136297 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:59.139630 systemd-logind[1131]: New session 4 of user core. Feb 9 10:05:59.140409 systemd[1]: Started session-4.scope. Feb 9 10:05:59.191106 sshd[1224]: pam_unix(sshd:session): session closed for user core Feb 9 10:05:59.194170 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:53358.service. Feb 9 10:05:59.194691 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:53354.service: Deactivated successfully. Feb 9 10:05:59.195242 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 10:05:59.195761 systemd-logind[1131]: Session 4 logged out. Waiting for processes to exit. Feb 9 10:05:59.196596 systemd-logind[1131]: Removed session 4. 
Feb 9 10:05:59.230229 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 53358 ssh2: RSA SHA256:g0U6KM199woo3jVvTXJmbHJGWRxGxX9UCqO141QChXg Feb 9 10:05:59.231413 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:05:59.234464 systemd-logind[1131]: New session 5 of user core. Feb 9 10:05:59.235207 systemd[1]: Started session-5.scope. Feb 9 10:05:59.289827 sudo[1233]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 10:05:59.290071 sudo[1233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 10:05:59.803130 systemd[1]: Reloading. Feb 9 10:05:59.847228 /usr/lib/systemd/system-generators/torcx-generator[1263]: time="2024-02-09T10:05:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:05:59.847258 /usr/lib/systemd/system-generators/torcx-generator[1263]: time="2024-02-09T10:05:59Z" level=info msg="torcx already run" Feb 9 10:05:59.895455 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:05:59.895470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 10:05:59.911656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:05:59.966206 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 10:05:59.971203 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 10:05:59.971603 systemd[1]: Reached target network-online.target. 
Feb 9 10:05:59.972869 systemd[1]: Started kubelet.service. Feb 9 10:05:59.981926 systemd[1]: Starting coreos-metadata.service... Feb 9 10:05:59.987935 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 10:05:59.988120 systemd[1]: Finished coreos-metadata.service. Feb 9 10:06:00.081111 kubelet[1301]: E0209 10:06:00.080998 1301 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 10:06:00.084091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 10:06:00.084224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 10:06:00.239689 systemd[1]: Stopped kubelet.service. Feb 9 10:06:00.251857 systemd[1]: Reloading. Feb 9 10:06:00.297846 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-09T10:06:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 10:06:00.298120 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2024-02-09T10:06:00Z" level=info msg="torcx already run" Feb 9 10:06:00.358961 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 10:06:00.358987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 10:06:00.375468 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 10:06:00.434875 systemd[1]: Started kubelet.service. Feb 9 10:06:00.471357 kubelet[1408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:06:00.471357 kubelet[1408]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 10:06:00.471357 kubelet[1408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 10:06:00.471635 kubelet[1408]: I0209 10:06:00.471430 1408 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 10:06:00.976079 kubelet[1408]: I0209 10:06:00.976048 1408 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 10:06:00.976079 kubelet[1408]: I0209 10:06:00.976079 1408 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 10:06:00.976307 kubelet[1408]: I0209 10:06:00.976289 1408 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 10:06:00.979829 kubelet[1408]: I0209 10:06:00.979798 1408 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 10:06:00.981142 kubelet[1408]: W0209 10:06:00.981118 1408 machine.go:65] Cannot read vendor id correctly, set empty. 
Feb 9 10:06:00.982011 kubelet[1408]: I0209 10:06:00.981987 1408 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 10:06:00.982210 kubelet[1408]: I0209 10:06:00.982191 1408 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 10:06:00.982295 kubelet[1408]: I0209 10:06:00.982283 1408 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 10:06:00.982368 kubelet[1408]: I0209 10:06:00.982303 1408 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 10:06:00.982368 kubelet[1408]: I0209 10:06:00.982315 1408 container_manager_linux.go:302] "Creating device plugin 
manager" Feb 9 10:06:00.982418 kubelet[1408]: I0209 10:06:00.982399 1408 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:06:00.986403 kubelet[1408]: I0209 10:06:00.986382 1408 kubelet.go:405] "Attempting to sync node with API server" Feb 9 10:06:00.986403 kubelet[1408]: I0209 10:06:00.986404 1408 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 10:06:00.986487 kubelet[1408]: I0209 10:06:00.986426 1408 kubelet.go:309] "Adding apiserver pod source" Feb 9 10:06:00.986487 kubelet[1408]: I0209 10:06:00.986439 1408 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 10:06:00.986617 kubelet[1408]: E0209 10:06:00.986599 1408 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:00.986700 kubelet[1408]: E0209 10:06:00.986689 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:00.987029 kubelet[1408]: I0209 10:06:00.987010 1408 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 10:06:00.987531 kubelet[1408]: W0209 10:06:00.987518 1408 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 10:06:00.988251 kubelet[1408]: I0209 10:06:00.988232 1408 server.go:1168] "Started kubelet" Feb 9 10:06:00.989118 kubelet[1408]: I0209 10:06:00.988648 1408 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 10:06:00.989118 kubelet[1408]: I0209 10:06:00.988925 1408 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 10:06:00.989520 kubelet[1408]: I0209 10:06:00.989493 1408 server.go:461] "Adding debug handlers to kubelet server" Feb 9 10:06:00.989679 kubelet[1408]: E0209 10:06:00.989658 1408 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 10:06:00.989727 kubelet[1408]: E0209 10:06:00.989683 1408 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 10:06:00.991584 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 10:06:00.991774 kubelet[1408]: I0209 10:06:00.991753 1408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 10:06:00.991991 kubelet[1408]: I0209 10:06:00.991912 1408 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 10:06:00.991991 kubelet[1408]: E0209 10:06:00.991924 1408 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.123\" not found" Feb 9 10:06:00.995085 kubelet[1408]: I0209 10:06:00.992780 1408 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 10:06:00.996397 kubelet[1408]: E0209 10:06:00.996296 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb18374226", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 0, 988213798, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 0, 988213798, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:00.996397 kubelet[1408]: E0209 10:06:00.996382 1408 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.123\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 10:06:00.996512 kubelet[1408]: W0209 10:06:00.996450 1408 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 10:06:00.996512 kubelet[1408]: E0209 10:06:00.996480 1408 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 10:06:00.996566 kubelet[1408]: W0209 10:06:00.996530 1408 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:06:00.996566 kubelet[1408]: E0209 10:06:00.996548 1408 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.123" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 10:06:00.996608 kubelet[1408]: W0209 10:06:00.996593 1408 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 10:06:00.996608 kubelet[1408]: E0209 10:06:00.996603 1408 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 10:06:01.001878 kubelet[1408]: E0209 10:06:01.001786 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb184d910b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 0, 989675787, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 0, 989675787, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.014498 kubelet[1408]: E0209 10:06:01.014424 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9bf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13542756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13542756, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.015450 kubelet[1408]: E0209 10:06:01.015387 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9d2ae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13547694, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13547694, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.015707 kubelet[1408]: I0209 10:06:01.015687 1408 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 10:06:01.015707 kubelet[1408]: I0209 10:06:01.015706 1408 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 10:06:01.015764 kubelet[1408]: I0209 10:06:01.015723 1408 state_mem.go:36] "Initialized new in-memory state store" Feb 9 10:06:01.016230 kubelet[1408]: E0209 10:06:01.016148 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19ba0757", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13561175, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13561175, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.093319 kubelet[1408]: I0209 10:06:01.093294 1408 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123" Feb 9 10:06:01.094515 kubelet[1408]: E0209 10:06:01.094490 1408 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123" Feb 9 10:06:01.094697 kubelet[1408]: E0209 10:06:01.094632 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9bf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13542756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 93248484, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9bf64" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.095572 kubelet[1408]: E0209 10:06:01.095517 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9d2ae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13547694, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 93263063, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9d2ae" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.096408 kubelet[1408]: E0209 10:06:01.096341 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19ba0757", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13561175, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 93266159, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19ba0757" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:06:01.128799 kubelet[1408]: I0209 10:06:01.128768 1408 policy_none.go:49] "None policy: Start" Feb 9 10:06:01.129491 kubelet[1408]: I0209 10:06:01.129475 1408 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 10:06:01.129559 kubelet[1408]: I0209 10:06:01.129501 1408 state_mem.go:35] "Initializing new in-memory state store" Feb 9 10:06:01.135197 systemd[1]: Created slice kubepods.slice. Feb 9 10:06:01.138788 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 10:06:01.143017 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 10:06:01.148563 kubelet[1408]: I0209 10:06:01.148537 1408 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 10:06:01.148748 kubelet[1408]: I0209 10:06:01.148726 1408 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 10:06:01.149540 kubelet[1408]: E0209 10:06:01.149523 1408 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.123\" not found" Feb 9 10:06:01.153315 kubelet[1408]: E0209 10:06:01.153234 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb21dee109", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 150193929, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 150193929, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace 
"default"' (will not retry!) Feb 9 10:06:01.171552 kubelet[1408]: I0209 10:06:01.171519 1408 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 10:06:01.172526 kubelet[1408]: I0209 10:06:01.172495 1408 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 10:06:01.172526 kubelet[1408]: I0209 10:06:01.172528 1408 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 10:06:01.172594 kubelet[1408]: I0209 10:06:01.172548 1408 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 10:06:01.172617 kubelet[1408]: E0209 10:06:01.172596 1408 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 10:06:01.174667 kubelet[1408]: W0209 10:06:01.174646 1408 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 10:06:01.174740 kubelet[1408]: E0209 10:06:01.174675 1408 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 10:06:01.198202 kubelet[1408]: E0209 10:06:01.198140 1408 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.123\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 10:06:01.296732 kubelet[1408]: I0209 10:06:01.296143 1408 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123" Feb 9 10:06:01.297731 kubelet[1408]: E0209 10:06:01.297705 1408 kubelet_node_status.go:92] "Unable to register node with API server" 
err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123" Feb 9 10:06:01.297863 kubelet[1408]: E0209 10:06:01.297782 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9bf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13542756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 296092550, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9bf64" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.298783 kubelet[1408]: E0209 10:06:01.298711 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9d2ae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13547694, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 296114654, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9d2ae" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.299561 kubelet[1408]: E0209 10:06:01.299509 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19ba0757", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13561175, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 296118063, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19ba0757" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.599498 kubelet[1408]: E0209 10:06:01.599401 1408 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.123\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 10:06:01.699236 kubelet[1408]: I0209 10:06:01.699216 1408 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123" Feb 9 10:06:01.700328 kubelet[1408]: E0209 10:06:01.700295 1408 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.123" Feb 9 10:06:01.700568 kubelet[1408]: E0209 10:06:01.700490 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9bf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.123 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13542756, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 699181681, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9bf64" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 10:06:01.701413 kubelet[1408]: E0209 10:06:01.701340 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19b9d2ae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.123 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13547694, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 699191518, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19b9d2ae" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.702192 kubelet[1408]: E0209 10:06:01.702139 1408 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.123.17b229cb19ba0757", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.123", UID:"10.0.0.123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.123 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.123"}, FirstTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 13561175, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 10, 6, 1, 699194339, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.123.17b229cb19ba0757" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 10:06:01.979138 kubelet[1408]: I0209 10:06:01.979014 1408 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 10:06:01.987290 kubelet[1408]: E0209 10:06:01.987257 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:02.351437 kubelet[1408]: E0209 10:06:02.351360 1408 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.123" not found Feb 9 10:06:02.403703 kubelet[1408]: E0209 10:06:02.403666 1408 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.123\" not found" node="10.0.0.123" Feb 9 10:06:02.501333 kubelet[1408]: I0209 10:06:02.501291 1408 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.123" Feb 9 10:06:02.504510 kubelet[1408]: I0209 10:06:02.504473 1408 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.123" Feb 9 10:06:02.512653 kubelet[1408]: I0209 10:06:02.512623 1408 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 10:06:02.513036 env[1141]: time="2024-02-09T10:06:02.512910441Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 10:06:02.513412 kubelet[1408]: I0209 10:06:02.513391 1408 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 10:06:02.947770 sudo[1233]: pam_unix(sudo:session): session closed for user root Feb 9 10:06:02.949446 sshd[1229]: pam_unix(sshd:session): session closed for user core Feb 9 10:06:02.951765 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 10:06:02.952386 systemd-logind[1131]: Session 5 logged out. Waiting for processes to exit. 
Feb 9 10:06:02.952489 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:53358.service: Deactivated successfully. Feb 9 10:06:02.953382 systemd-logind[1131]: Removed session 5. Feb 9 10:06:02.988368 kubelet[1408]: I0209 10:06:02.988328 1408 apiserver.go:52] "Watching apiserver" Feb 9 10:06:02.988746 kubelet[1408]: E0209 10:06:02.988329 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:02.991094 kubelet[1408]: I0209 10:06:02.991065 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:02.991265 kubelet[1408]: I0209 10:06:02.991247 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:02.994474 kubelet[1408]: I0209 10:06:02.994437 1408 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 10:06:02.995777 systemd[1]: Created slice kubepods-burstable-pod4086108e_9d13_42a8_91b1_64f9e50221a8.slice. Feb 9 10:06:02.999753 kubelet[1408]: I0209 10:06:02.999723 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2aae1ae-982f-4eb1-9c64-15433d19bec9-kube-proxy\") pod \"kube-proxy-lfp6n\" (UID: \"f2aae1ae-982f-4eb1-9c64-15433d19bec9\") " pod="kube-system/kube-proxy-lfp6n" Feb 9 10:06:02.999811 kubelet[1408]: I0209 10:06:02.999761 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-bpf-maps\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999811 kubelet[1408]: I0209 10:06:02.999785 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cni-path\") pod \"cilium-gcngf\" (UID: 
\"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999811 kubelet[1408]: I0209 10:06:02.999803 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-etc-cni-netd\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999891 kubelet[1408]: I0209 10:06:02.999821 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-hubble-tls\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999891 kubelet[1408]: I0209 10:06:02.999844 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2aae1ae-982f-4eb1-9c64-15433d19bec9-xtables-lock\") pod \"kube-proxy-lfp6n\" (UID: \"f2aae1ae-982f-4eb1-9c64-15433d19bec9\") " pod="kube-system/kube-proxy-lfp6n" Feb 9 10:06:02.999891 kubelet[1408]: I0209 10:06:02.999867 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2aae1ae-982f-4eb1-9c64-15433d19bec9-lib-modules\") pod \"kube-proxy-lfp6n\" (UID: \"f2aae1ae-982f-4eb1-9c64-15433d19bec9\") " pod="kube-system/kube-proxy-lfp6n" Feb 9 10:06:02.999891 kubelet[1408]: I0209 10:06:02.999885 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrts\" (UniqueName: \"kubernetes.io/projected/f2aae1ae-982f-4eb1-9c64-15433d19bec9-kube-api-access-kcrts\") pod \"kube-proxy-lfp6n\" (UID: \"f2aae1ae-982f-4eb1-9c64-15433d19bec9\") " pod="kube-system/kube-proxy-lfp6n" Feb 9 10:06:02.999984 kubelet[1408]: 
I0209 10:06:02.999902 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-cgroup\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999984 kubelet[1408]: I0209 10:06:02.999920 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4086108e-9d13-42a8-91b1-64f9e50221a8-clustermesh-secrets\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999984 kubelet[1408]: I0209 10:06:02.999938 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ltm9\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-kube-api-access-6ltm9\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:02.999984 kubelet[1408]: I0209 10:06:02.999955 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-xtables-lock\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000069 kubelet[1408]: I0209 10:06:02.999988 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-kernel\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000069 kubelet[1408]: I0209 10:06:03.000010 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-run\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000069 kubelet[1408]: I0209 10:06:03.000027 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-hostproc\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000069 kubelet[1408]: I0209 10:06:03.000045 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-lib-modules\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000160 kubelet[1408]: I0209 10:06:03.000084 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-config-path\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000160 kubelet[1408]: I0209 10:06:03.000121 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-net\") pod \"cilium-gcngf\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " pod="kube-system/cilium-gcngf" Feb 9 10:06:03.000160 kubelet[1408]: I0209 10:06:03.000146 1408 reconciler.go:41] "Reconciler: start to sync state" Feb 9 10:06:03.010900 systemd[1]: Created slice kubepods-besteffort-podf2aae1ae_982f_4eb1_9c64_15433d19bec9.slice. 
Feb 9 10:06:03.310328 kubelet[1408]: E0209 10:06:03.310225 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:03.311189 env[1141]: time="2024-02-09T10:06:03.311147338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcngf,Uid:4086108e-9d13-42a8-91b1-64f9e50221a8,Namespace:kube-system,Attempt:0,}"
Feb 9 10:06:03.317547 kubelet[1408]: E0209 10:06:03.317522 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:03.318012 env[1141]: time="2024-02-09T10:06:03.317956663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfp6n,Uid:f2aae1ae-982f-4eb1-9c64-15433d19bec9,Namespace:kube-system,Attempt:0,}"
Feb 9 10:06:03.857260 env[1141]: time="2024-02-09T10:06:03.857220937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.858642 env[1141]: time="2024-02-09T10:06:03.858579517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.860792 env[1141]: time="2024-02-09T10:06:03.860749844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.862132 env[1141]: time="2024-02-09T10:06:03.862109252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.863516 env[1141]: time="2024-02-09T10:06:03.863484687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.864232 env[1141]: time="2024-02-09T10:06:03.864203929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.866407 env[1141]: time="2024-02-09T10:06:03.866380871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.867816 env[1141]: time="2024-02-09T10:06:03.867786629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:03.898233 env[1141]: time="2024-02-09T10:06:03.898158244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:06:03.898233 env[1141]: time="2024-02-09T10:06:03.898196049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:06:03.898233 env[1141]: time="2024-02-09T10:06:03.898205816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:06:03.898538 env[1141]: time="2024-02-09T10:06:03.898469270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cf481330e99342cf16e7919d5b850d59df52d20d4fd9be27776862efe3d04c5 pid=1470 runtime=io.containerd.runc.v2
Feb 9 10:06:03.899185 env[1141]: time="2024-02-09T10:06:03.899137120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:06:03.899185 env[1141]: time="2024-02-09T10:06:03.899172326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:06:03.899263 env[1141]: time="2024-02-09T10:06:03.899183116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:06:03.899414 env[1141]: time="2024-02-09T10:06:03.899375844Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c pid=1469 runtime=io.containerd.runc.v2
Feb 9 10:06:03.920408 systemd[1]: Started cri-containerd-8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c.scope.
Feb 9 10:06:03.925921 systemd[1]: Started cri-containerd-3cf481330e99342cf16e7919d5b850d59df52d20d4fd9be27776862efe3d04c5.scope.
Feb 9 10:06:03.967925 env[1141]: time="2024-02-09T10:06:03.967326651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfp6n,Uid:f2aae1ae-982f-4eb1-9c64-15433d19bec9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cf481330e99342cf16e7919d5b850d59df52d20d4fd9be27776862efe3d04c5\""
Feb 9 10:06:03.967925 env[1141]: time="2024-02-09T10:06:03.967878566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcngf,Uid:4086108e-9d13-42a8-91b1-64f9e50221a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\""
Feb 9 10:06:03.968375 kubelet[1408]: E0209 10:06:03.968351 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:03.968764 kubelet[1408]: E0209 10:06:03.968655 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:03.969721 env[1141]: time="2024-02-09T10:06:03.969691871Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 10:06:03.989437 kubelet[1408]: E0209 10:06:03.989407 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:04.107214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781376040.mount: Deactivated successfully.
Feb 9 10:06:04.990142 kubelet[1408]: E0209 10:06:04.990088 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:05.990261 kubelet[1408]: E0209 10:06:05.990219 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:06.991038 kubelet[1408]: E0209 10:06:06.991007 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:07.120101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959591551.mount: Deactivated successfully.
Feb 9 10:06:07.991963 kubelet[1408]: E0209 10:06:07.991923 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:08.992880 kubelet[1408]: E0209 10:06:08.992845 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:09.179375 env[1141]: time="2024-02-09T10:06:09.179330885Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:09.181419 env[1141]: time="2024-02-09T10:06:09.181385260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:09.183444 env[1141]: time="2024-02-09T10:06:09.183414214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:09.184151 env[1141]: time="2024-02-09T10:06:09.184080866Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 10:06:09.185337 env[1141]: time="2024-02-09T10:06:09.185263585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 9 10:06:09.186431 env[1141]: time="2024-02-09T10:06:09.186383504Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:06:09.195913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550477486.mount: Deactivated successfully.
Feb 9 10:06:09.199962 env[1141]: time="2024-02-09T10:06:09.199930928Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\""
Feb 9 10:06:09.200714 env[1141]: time="2024-02-09T10:06:09.200685922Z" level=info msg="StartContainer for \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\""
Feb 9 10:06:09.217849 systemd[1]: Started cri-containerd-41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0.scope.
Feb 9 10:06:09.256490 env[1141]: time="2024-02-09T10:06:09.253002970Z" level=info msg="StartContainer for \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\" returns successfully"
Feb 9 10:06:09.291223 systemd[1]: cri-containerd-41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0.scope: Deactivated successfully.
Feb 9 10:06:09.403582 env[1141]: time="2024-02-09T10:06:09.403533948Z" level=info msg="shim disconnected" id=41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0
Feb 9 10:06:09.403582 env[1141]: time="2024-02-09T10:06:09.403575775Z" level=warning msg="cleaning up after shim disconnected" id=41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0 namespace=k8s.io
Feb 9 10:06:09.403582 env[1141]: time="2024-02-09T10:06:09.403587255Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:09.410665 env[1141]: time="2024-02-09T10:06:09.410621483Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1587 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:09.993606 kubelet[1408]: E0209 10:06:09.993568 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:10.186926 kubelet[1408]: E0209 10:06:10.186893 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:10.188606 env[1141]: time="2024-02-09T10:06:10.188537264Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:06:10.194675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0-rootfs.mount: Deactivated successfully.
Feb 9 10:06:10.202748 env[1141]: time="2024-02-09T10:06:10.202689396Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\""
Feb 9 10:06:10.203305 env[1141]: time="2024-02-09T10:06:10.203276562Z" level=info msg="StartContainer for \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\""
Feb 9 10:06:10.220049 systemd[1]: Started cri-containerd-7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38.scope.
Feb 9 10:06:10.258574 env[1141]: time="2024-02-09T10:06:10.258145062Z" level=info msg="StartContainer for \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\" returns successfully"
Feb 9 10:06:10.273572 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 10:06:10.273774 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 10:06:10.273942 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 10:06:10.275578 systemd[1]: Starting systemd-sysctl.service...
Feb 9 10:06:10.277241 systemd[1]: cri-containerd-7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38.scope: Deactivated successfully.
Feb 9 10:06:10.282433 systemd[1]: Finished systemd-sysctl.service.
Feb 9 10:06:10.340213 env[1141]: time="2024-02-09T10:06:10.340158999Z" level=info msg="shim disconnected" id=7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38
Feb 9 10:06:10.340213 env[1141]: time="2024-02-09T10:06:10.340202652Z" level=warning msg="cleaning up after shim disconnected" id=7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38 namespace=k8s.io
Feb 9 10:06:10.340213 env[1141]: time="2024-02-09T10:06:10.340213227Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:10.346321 env[1141]: time="2024-02-09T10:06:10.346279495Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1650 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:10.740372 env[1141]: time="2024-02-09T10:06:10.740267104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:10.741935 env[1141]: time="2024-02-09T10:06:10.741904919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:10.744049 env[1141]: time="2024-02-09T10:06:10.744023192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:10.745884 env[1141]: time="2024-02-09T10:06:10.745857166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 10:06:10.746467 env[1141]: time="2024-02-09T10:06:10.746440873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\""
Feb 9 10:06:10.748140 env[1141]: time="2024-02-09T10:06:10.748108307Z" level=info msg="CreateContainer within sandbox \"3cf481330e99342cf16e7919d5b850d59df52d20d4fd9be27776862efe3d04c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 10:06:10.758146 env[1141]: time="2024-02-09T10:06:10.758109885Z" level=info msg="CreateContainer within sandbox \"3cf481330e99342cf16e7919d5b850d59df52d20d4fd9be27776862efe3d04c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"408691e6a5cf10092c95ed24495da4271389afc479f4307982d4906e1237f33d\""
Feb 9 10:06:10.758791 env[1141]: time="2024-02-09T10:06:10.758761536Z" level=info msg="StartContainer for \"408691e6a5cf10092c95ed24495da4271389afc479f4307982d4906e1237f33d\""
Feb 9 10:06:10.772920 systemd[1]: Started cri-containerd-408691e6a5cf10092c95ed24495da4271389afc479f4307982d4906e1237f33d.scope.
Feb 9 10:06:10.807523 env[1141]: time="2024-02-09T10:06:10.807474436Z" level=info msg="StartContainer for \"408691e6a5cf10092c95ed24495da4271389afc479f4307982d4906e1237f33d\" returns successfully"
Feb 9 10:06:10.993808 kubelet[1408]: E0209 10:06:10.993679 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:11.189452 kubelet[1408]: E0209 10:06:11.189096 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:11.191354 kubelet[1408]: E0209 10:06:11.191331 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:11.194487 systemd[1]: run-containerd-runc-k8s.io-7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38-runc.IcrEDe.mount: Deactivated successfully.
Feb 9 10:06:11.194576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38-rootfs.mount: Deactivated successfully.
Feb 9 10:06:11.194626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060427135.mount: Deactivated successfully.
Feb 9 10:06:11.196731 env[1141]: time="2024-02-09T10:06:11.196659046Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:06:11.198644 kubelet[1408]: I0209 10:06:11.198615 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lfp6n" podStartSLOduration=2.421230908 podCreationTimestamp="2024-02-09 10:06:02 +0000 UTC" firstStartedPulling="2024-02-09 10:06:03.969351113 +0000 UTC m=+3.531667002" lastFinishedPulling="2024-02-09 10:06:10.746693964 +0000 UTC m=+10.309009853" observedRunningTime="2024-02-09 10:06:11.198288963 +0000 UTC m=+10.760604852" watchObservedRunningTime="2024-02-09 10:06:11.198573759 +0000 UTC m=+10.760889648"
Feb 9 10:06:11.205883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183719048.mount: Deactivated successfully.
Feb 9 10:06:11.208230 env[1141]: time="2024-02-09T10:06:11.208191127Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\""
Feb 9 10:06:11.208909 env[1141]: time="2024-02-09T10:06:11.208882069Z" level=info msg="StartContainer for \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\""
Feb 9 10:06:11.226127 systemd[1]: Started cri-containerd-0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740.scope.
Feb 9 10:06:11.264132 env[1141]: time="2024-02-09T10:06:11.263709079Z" level=info msg="StartContainer for \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\" returns successfully"
Feb 9 10:06:11.277371 systemd[1]: cri-containerd-0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740.scope: Deactivated successfully.
Feb 9 10:06:11.367079 env[1141]: time="2024-02-09T10:06:11.367034726Z" level=info msg="shim disconnected" id=0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740
Feb 9 10:06:11.367079 env[1141]: time="2024-02-09T10:06:11.367076343Z" level=warning msg="cleaning up after shim disconnected" id=0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740 namespace=k8s.io
Feb 9 10:06:11.367079 env[1141]: time="2024-02-09T10:06:11.367086091Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:11.374206 env[1141]: time="2024-02-09T10:06:11.374160589Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1865 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:11.994078 kubelet[1408]: E0209 10:06:11.994034 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:12.193875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740-rootfs.mount: Deactivated successfully.
Feb 9 10:06:12.194739 kubelet[1408]: E0209 10:06:12.194664 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:12.194992 kubelet[1408]: E0209 10:06:12.194964 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:12.196564 env[1141]: time="2024-02-09T10:06:12.196517597Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:06:12.210319 env[1141]: time="2024-02-09T10:06:12.210267092Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\""
Feb 9 10:06:12.210751 env[1141]: time="2024-02-09T10:06:12.210703290Z" level=info msg="StartContainer for \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\""
Feb 9 10:06:12.225547 systemd[1]: Started cri-containerd-065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396.scope.
Feb 9 10:06:12.259357 systemd[1]: cri-containerd-065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396.scope: Deactivated successfully.
Feb 9 10:06:12.261282 env[1141]: time="2024-02-09T10:06:12.261245243Z" level=info msg="StartContainer for \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\" returns successfully"
Feb 9 10:06:12.278946 env[1141]: time="2024-02-09T10:06:12.278904680Z" level=info msg="shim disconnected" id=065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396
Feb 9 10:06:12.279168 env[1141]: time="2024-02-09T10:06:12.278950187Z" level=warning msg="cleaning up after shim disconnected" id=065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396 namespace=k8s.io
Feb 9 10:06:12.279168 env[1141]: time="2024-02-09T10:06:12.278960141Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:12.285093 env[1141]: time="2024-02-09T10:06:12.285057128Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1920 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:12.994272 kubelet[1408]: E0209 10:06:12.994246 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:13.193998 systemd[1]: run-containerd-runc-k8s.io-065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396-runc.W1c5Jr.mount: Deactivated successfully.
Feb 9 10:06:13.194106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396-rootfs.mount: Deactivated successfully.
Feb 9 10:06:13.197661 kubelet[1408]: E0209 10:06:13.197639 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:13.200019 env[1141]: time="2024-02-09T10:06:13.199971269Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:06:13.214124 env[1141]: time="2024-02-09T10:06:13.214075588Z" level=info msg="CreateContainer within sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\""
Feb 9 10:06:13.214499 env[1141]: time="2024-02-09T10:06:13.214475671Z" level=info msg="StartContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\""
Feb 9 10:06:13.231555 systemd[1]: Started cri-containerd-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada.scope.
Feb 9 10:06:13.274071 env[1141]: time="2024-02-09T10:06:13.273712368Z" level=info msg="StartContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" returns successfully"
Feb 9 10:06:13.379005 kubelet[1408]: I0209 10:06:13.378949 1408 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 10:06:13.528020 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 10:06:13.776001 kernel: Initializing XFRM netlink socket
Feb 9 10:06:13.779766 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 10:06:13.995334 kubelet[1408]: E0209 10:06:13.995290 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:14.194073 systemd[1]: run-containerd-runc-k8s.io-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada-runc.zf1pn5.mount: Deactivated successfully.
Feb 9 10:06:14.202629 kubelet[1408]: E0209 10:06:14.202569 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:14.220453 kubelet[1408]: I0209 10:06:14.220420 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gcngf" podStartSLOduration=7.004612455 podCreationTimestamp="2024-02-09 10:06:02 +0000 UTC" firstStartedPulling="2024-02-09 10:06:03.969284009 +0000 UTC m=+3.531599898" lastFinishedPulling="2024-02-09 10:06:09.185056674 +0000 UTC m=+8.747372563" observedRunningTime="2024-02-09 10:06:14.219456803 +0000 UTC m=+13.781772692" watchObservedRunningTime="2024-02-09 10:06:14.22038512 +0000 UTC m=+13.782701009"
Feb 9 10:06:14.278902 kubelet[1408]: I0209 10:06:14.278866 1408 topology_manager.go:212] "Topology Admit Handler"
Feb 9 10:06:14.283766 systemd[1]: Created slice kubepods-besteffort-podaf2ce7c0_0840_44cf_8c5e_f1e1722e0d76.slice.
Feb 9 10:06:14.454392 kubelet[1408]: I0209 10:06:14.454272 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkw4\" (UniqueName: \"kubernetes.io/projected/af2ce7c0-0840-44cf-8c5e-f1e1722e0d76-kube-api-access-6bkw4\") pod \"nginx-deployment-845c78c8b9-b7nbk\" (UID: \"af2ce7c0-0840-44cf-8c5e-f1e1722e0d76\") " pod="default/nginx-deployment-845c78c8b9-b7nbk"
Feb 9 10:06:14.585937 env[1141]: time="2024-02-09T10:06:14.585894739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-b7nbk,Uid:af2ce7c0-0840-44cf-8c5e-f1e1722e0d76,Namespace:default,Attempt:0,}"
Feb 9 10:06:14.995944 kubelet[1408]: E0209 10:06:14.995902 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:15.203905 kubelet[1408]: E0209 10:06:15.203856 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:15.380363 systemd-networkd[1044]: cilium_host: Link UP
Feb 9 10:06:15.380795 systemd-networkd[1044]: cilium_net: Link UP
Feb 9 10:06:15.383193 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 10:06:15.383258 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 10:06:15.382886 systemd-networkd[1044]: cilium_net: Gained carrier
Feb 9 10:06:15.383071 systemd-networkd[1044]: cilium_host: Gained carrier
Feb 9 10:06:15.452064 systemd-networkd[1044]: cilium_vxlan: Link UP
Feb 9 10:06:15.452070 systemd-networkd[1044]: cilium_vxlan: Gained carrier
Feb 9 10:06:15.591352 systemd-networkd[1044]: cilium_host: Gained IPv6LL
Feb 9 10:06:15.607340 systemd-networkd[1044]: cilium_net: Gained IPv6LL
Feb 9 10:06:15.726005 kernel: NET: Registered PF_ALG protocol family
Feb 9 10:06:15.996080 kubelet[1408]: E0209 10:06:15.996044 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:16.205046 kubelet[1408]: E0209 10:06:16.205016 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:16.262008 systemd-networkd[1044]: lxc_health: Link UP
Feb 9 10:06:16.270605 systemd-networkd[1044]: lxc_health: Gained carrier
Feb 9 10:06:16.271038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:06:16.619382 systemd-networkd[1044]: lxceee51d4bd00a: Link UP
Feb 9 10:06:16.628009 kernel: eth0: renamed from tmp0fdad
Feb 9 10:06:16.635039 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 10:06:16.635117 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceee51d4bd00a: link becomes ready
Feb 9 10:06:16.635130 systemd-networkd[1044]: lxceee51d4bd00a: Gained carrier
Feb 9 10:06:16.711417 systemd-networkd[1044]: cilium_vxlan: Gained IPv6LL
Feb 9 10:06:16.996720 kubelet[1408]: E0209 10:06:16.996613 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:17.312270 kubelet[1408]: E0209 10:06:17.312182 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:17.479431 systemd-networkd[1044]: lxc_health: Gained IPv6LL
Feb 9 10:06:17.799358 systemd-networkd[1044]: lxceee51d4bd00a: Gained IPv6LL
Feb 9 10:06:17.996857 kubelet[1408]: E0209 10:06:17.996806 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:18.207616 kubelet[1408]: E0209 10:06:18.207526 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:18.997666 kubelet[1408]: E0209 10:06:18.997625 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:19.208371 kubelet[1408]: E0209 10:06:19.208339 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:19.998100 kubelet[1408]: E0209 10:06:19.998055 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:20.062808 env[1141]: time="2024-02-09T10:06:20.062740361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:06:20.063165 env[1141]: time="2024-02-09T10:06:20.062786247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:06:20.063165 env[1141]: time="2024-02-09T10:06:20.062796431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:06:20.063165 env[1141]: time="2024-02-09T10:06:20.062919913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0 pid=2466 runtime=io.containerd.runc.v2
Feb 9 10:06:20.074204 systemd[1]: run-containerd-runc-k8s.io-0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0-runc.7ouvIw.mount: Deactivated successfully.
Feb 9 10:06:20.077040 systemd[1]: Started cri-containerd-0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0.scope.
Feb 9 10:06:20.130417 systemd-resolved[1089]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:06:20.146295 env[1141]: time="2024-02-09T10:06:20.146257834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-b7nbk,Uid:af2ce7c0-0840-44cf-8c5e-f1e1722e0d76,Namespace:default,Attempt:0,} returns sandbox id \"0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0\"" Feb 9 10:06:20.148066 env[1141]: time="2024-02-09T10:06:20.148027876Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 10:06:20.987573 kubelet[1408]: E0209 10:06:20.987528 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:20.998682 kubelet[1408]: E0209 10:06:20.998640 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:21.999047 kubelet[1408]: E0209 10:06:21.999005 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:22.802695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240265655.mount: Deactivated successfully. 
Feb 9 10:06:23.000128 kubelet[1408]: E0209 10:06:23.000094 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:23.517920 env[1141]: time="2024-02-09T10:06:23.517875435Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:23.519459 env[1141]: time="2024-02-09T10:06:23.519431525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:23.521036 env[1141]: time="2024-02-09T10:06:23.521009751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:23.523202 env[1141]: time="2024-02-09T10:06:23.523173388Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:23.523855 env[1141]: time="2024-02-09T10:06:23.523826128Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 10:06:23.525789 env[1141]: time="2024-02-09T10:06:23.525760971Z" level=info msg="CreateContainer within sandbox \"0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 10:06:23.534201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172597508.mount: Deactivated successfully. Feb 9 10:06:23.537457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316703588.mount: Deactivated successfully. 
Feb 9 10:06:23.538687 env[1141]: time="2024-02-09T10:06:23.538651894Z" level=info msg="CreateContainer within sandbox \"0fdad83672c4fa967118576faac193e7b68a14043e96be6a29460cf62f9b3eb0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"535482eab3b489bc82f966a21f4016ee5dfc5c266a32ee8ac43b5ddc522a627a\"" Feb 9 10:06:23.539237 env[1141]: time="2024-02-09T10:06:23.539211733Z" level=info msg="StartContainer for \"535482eab3b489bc82f966a21f4016ee5dfc5c266a32ee8ac43b5ddc522a627a\"" Feb 9 10:06:23.552590 systemd[1]: Started cri-containerd-535482eab3b489bc82f966a21f4016ee5dfc5c266a32ee8ac43b5ddc522a627a.scope. Feb 9 10:06:23.585715 env[1141]: time="2024-02-09T10:06:23.585676339Z" level=info msg="StartContainer for \"535482eab3b489bc82f966a21f4016ee5dfc5c266a32ee8ac43b5ddc522a627a\" returns successfully" Feb 9 10:06:24.000715 kubelet[1408]: E0209 10:06:24.000164 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:24.225608 kubelet[1408]: I0209 10:06:24.225567 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-b7nbk" podStartSLOduration=6.848965425 podCreationTimestamp="2024-02-09 10:06:14 +0000 UTC" firstStartedPulling="2024-02-09 10:06:20.147483908 +0000 UTC m=+19.709799797" lastFinishedPulling="2024-02-09 10:06:23.524049208 +0000 UTC m=+23.086365097" observedRunningTime="2024-02-09 10:06:24.225400247 +0000 UTC m=+23.787716136" watchObservedRunningTime="2024-02-09 10:06:24.225530725 +0000 UTC m=+23.787846614" Feb 9 10:06:25.001320 kubelet[1408]: E0209 10:06:25.001262 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:25.832590 kubelet[1408]: I0209 10:06:25.832538 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:25.837026 systemd[1]: Created slice kubepods-besteffort-pod5904b81c_154d_47df_a421_4014f47aced5.slice. 
Feb 9 10:06:26.002424 kubelet[1408]: E0209 10:06:26.002381 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:26.006724 kubelet[1408]: I0209 10:06:26.006622 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5904b81c-154d-47df-a421-4014f47aced5-data\") pod \"nfs-server-provisioner-0\" (UID: \"5904b81c-154d-47df-a421-4014f47aced5\") " pod="default/nfs-server-provisioner-0" Feb 9 10:06:26.006724 kubelet[1408]: I0209 10:06:26.006685 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6gc\" (UniqueName: \"kubernetes.io/projected/5904b81c-154d-47df-a421-4014f47aced5-kube-api-access-wc6gc\") pod \"nfs-server-provisioner-0\" (UID: \"5904b81c-154d-47df-a421-4014f47aced5\") " pod="default/nfs-server-provisioner-0" Feb 9 10:06:26.139658 env[1141]: time="2024-02-09T10:06:26.139558564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5904b81c-154d-47df-a421-4014f47aced5,Namespace:default,Attempt:0,}" Feb 9 10:06:26.166071 systemd-networkd[1044]: lxcb4ad37f375a8: Link UP Feb 9 10:06:26.176011 kernel: eth0: renamed from tmp8c0f8 Feb 9 10:06:26.183418 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:06:26.183505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb4ad37f375a8: link becomes ready Feb 9 10:06:26.183565 systemd-networkd[1044]: lxcb4ad37f375a8: Gained carrier Feb 9 10:06:26.372722 env[1141]: time="2024-02-09T10:06:26.372654888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:26.372722 env[1141]: time="2024-02-09T10:06:26.372694539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:26.372722 env[1141]: time="2024-02-09T10:06:26.372705212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:26.372917 env[1141]: time="2024-02-09T10:06:26.372827324Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c0f8dd4a69f4cecd56b83f87782742c005301c8c41e461e8321c409f8749b19 pid=2597 runtime=io.containerd.runc.v2 Feb 9 10:06:26.385398 systemd[1]: Started cri-containerd-8c0f8dd4a69f4cecd56b83f87782742c005301c8c41e461e8321c409f8749b19.scope. Feb 9 10:06:26.408714 systemd-resolved[1089]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:06:26.424104 env[1141]: time="2024-02-09T10:06:26.424063134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5904b81c-154d-47df-a421-4014f47aced5,Namespace:default,Attempt:0,} returns sandbox id \"8c0f8dd4a69f4cecd56b83f87782742c005301c8c41e461e8321c409f8749b19\"" Feb 9 10:06:26.425269 env[1141]: time="2024-02-09T10:06:26.425243566Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 10:06:27.002640 kubelet[1408]: E0209 10:06:27.002592 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:27.783125 systemd-networkd[1044]: lxcb4ad37f375a8: Gained IPv6LL Feb 9 10:06:28.003386 kubelet[1408]: E0209 10:06:28.003321 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:28.625230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966677912.mount: Deactivated successfully. 
Feb 9 10:06:29.003881 kubelet[1408]: E0209 10:06:29.003666 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:30.004729 kubelet[1408]: E0209 10:06:30.004696 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:30.383361 env[1141]: time="2024-02-09T10:06:30.383049110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:30.385205 env[1141]: time="2024-02-09T10:06:30.385167157Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:30.387050 env[1141]: time="2024-02-09T10:06:30.387021773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:30.389055 env[1141]: time="2024-02-09T10:06:30.389025549Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:30.390466 env[1141]: time="2024-02-09T10:06:30.390416162Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 9 10:06:30.392295 env[1141]: time="2024-02-09T10:06:30.392254774Z" level=info msg="CreateContainer within sandbox \"8c0f8dd4a69f4cecd56b83f87782742c005301c8c41e461e8321c409f8749b19\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 
10:06:30.403276 env[1141]: time="2024-02-09T10:06:30.403232152Z" level=info msg="CreateContainer within sandbox \"8c0f8dd4a69f4cecd56b83f87782742c005301c8c41e461e8321c409f8749b19\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"cc5c06a218e315e74fb39c347efb904151c12ac32f3a72ba0135cbb1e7e8271d\"" Feb 9 10:06:30.403763 env[1141]: time="2024-02-09T10:06:30.403694396Z" level=info msg="StartContainer for \"cc5c06a218e315e74fb39c347efb904151c12ac32f3a72ba0135cbb1e7e8271d\"" Feb 9 10:06:30.421465 systemd[1]: run-containerd-runc-k8s.io-cc5c06a218e315e74fb39c347efb904151c12ac32f3a72ba0135cbb1e7e8271d-runc.AiIHIy.mount: Deactivated successfully. Feb 9 10:06:30.422874 systemd[1]: Started cri-containerd-cc5c06a218e315e74fb39c347efb904151c12ac32f3a72ba0135cbb1e7e8271d.scope. Feb 9 10:06:30.466927 env[1141]: time="2024-02-09T10:06:30.466875826Z" level=info msg="StartContainer for \"cc5c06a218e315e74fb39c347efb904151c12ac32f3a72ba0135cbb1e7e8271d\" returns successfully" Feb 9 10:06:31.006180 kubelet[1408]: E0209 10:06:31.006141 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:31.245274 kubelet[1408]: I0209 10:06:31.245227 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.279559673 podCreationTimestamp="2024-02-09 10:06:25 +0000 UTC" firstStartedPulling="2024-02-09 10:06:26.425039832 +0000 UTC m=+25.987355721" lastFinishedPulling="2024-02-09 10:06:30.39067175 +0000 UTC m=+29.952987639" observedRunningTime="2024-02-09 10:06:31.245018747 +0000 UTC m=+30.807334636" watchObservedRunningTime="2024-02-09 10:06:31.245191591 +0000 UTC m=+30.807507480" Feb 9 10:06:32.007091 kubelet[1408]: E0209 10:06:32.007025 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:33.008018 kubelet[1408]: E0209 10:06:33.007974 
1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:34.008575 kubelet[1408]: E0209 10:06:34.008520 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:35.009176 kubelet[1408]: E0209 10:06:35.009119 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:36.009838 kubelet[1408]: E0209 10:06:36.009795 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:37.010433 kubelet[1408]: E0209 10:06:37.010382 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:38.010824 kubelet[1408]: E0209 10:06:38.010766 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:39.011391 kubelet[1408]: E0209 10:06:39.011355 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:40.012126 kubelet[1408]: E0209 10:06:40.012065 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:40.034134 update_engine[1132]: I0209 10:06:40.034077 1132 update_attempter.cc:509] Updating boot flags... Feb 9 10:06:40.364933 kubelet[1408]: I0209 10:06:40.364717 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:40.370525 systemd[1]: Created slice kubepods-besteffort-pod4252491d_98db_4653_b9dd_004954019c07.slice. 
Feb 9 10:06:40.473466 kubelet[1408]: I0209 10:06:40.473375 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9fb6b2bb-d2b3-4707-bc13-0e2cc9f87e8e\" (UniqueName: \"kubernetes.io/nfs/4252491d-98db-4653-b9dd-004954019c07-pvc-9fb6b2bb-d2b3-4707-bc13-0e2cc9f87e8e\") pod \"test-pod-1\" (UID: \"4252491d-98db-4653-b9dd-004954019c07\") " pod="default/test-pod-1" Feb 9 10:06:40.473466 kubelet[1408]: I0209 10:06:40.473429 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqtt\" (UniqueName: \"kubernetes.io/projected/4252491d-98db-4653-b9dd-004954019c07-kube-api-access-2lqtt\") pod \"test-pod-1\" (UID: \"4252491d-98db-4653-b9dd-004954019c07\") " pod="default/test-pod-1" Feb 9 10:06:40.598014 kernel: FS-Cache: Loaded Feb 9 10:06:40.621373 kernel: RPC: Registered named UNIX socket transport module. Feb 9 10:06:40.621464 kernel: RPC: Registered udp transport module. Feb 9 10:06:40.621492 kernel: RPC: Registered tcp transport module. Feb 9 10:06:40.621511 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 10:06:40.657006 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 10:06:40.785177 kernel: NFS: Registering the id_resolver key type Feb 9 10:06:40.785323 kernel: Key type id_resolver registered Feb 9 10:06:40.785366 kernel: Key type id_legacy registered Feb 9 10:06:40.809397 nfsidmap[2726]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 10:06:40.814315 nfsidmap[2729]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 9 10:06:40.974583 env[1141]: time="2024-02-09T10:06:40.974470574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4252491d-98db-4653-b9dd-004954019c07,Namespace:default,Attempt:0,}" Feb 9 10:06:40.987992 kubelet[1408]: E0209 10:06:40.987945 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:41.003749 systemd-networkd[1044]: lxc11c4a0f53048: Link UP Feb 9 10:06:41.012015 kernel: eth0: renamed from tmpb2bdf Feb 9 10:06:41.012202 kubelet[1408]: E0209 10:06:41.012161 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:41.019600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 10:06:41.019679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc11c4a0f53048: link becomes ready Feb 9 10:06:41.019729 systemd-networkd[1044]: lxc11c4a0f53048: Gained carrier Feb 9 10:06:41.247522 env[1141]: time="2024-02-09T10:06:41.247383401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:41.247522 env[1141]: time="2024-02-09T10:06:41.247426248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:41.247691 env[1141]: time="2024-02-09T10:06:41.247436849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:41.249786 env[1141]: time="2024-02-09T10:06:41.249691304Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2bdf6725ddc1aaa869f240840181f59087d7ce5d9e6f793582748150733929d pid=2762 runtime=io.containerd.runc.v2 Feb 9 10:06:41.263456 systemd[1]: Started cri-containerd-b2bdf6725ddc1aaa869f240840181f59087d7ce5d9e6f793582748150733929d.scope. Feb 9 10:06:41.290091 systemd-resolved[1089]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 10:06:41.307036 env[1141]: time="2024-02-09T10:06:41.306964173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4252491d-98db-4653-b9dd-004954019c07,Namespace:default,Attempt:0,} returns sandbox id \"b2bdf6725ddc1aaa869f240840181f59087d7ce5d9e6f793582748150733929d\"" Feb 9 10:06:41.308641 env[1141]: time="2024-02-09T10:06:41.308609977Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 10:06:41.626125 env[1141]: time="2024-02-09T10:06:41.626018251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:41.627955 env[1141]: time="2024-02-09T10:06:41.627917133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:41.630081 env[1141]: time="2024-02-09T10:06:41.630046290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 10:06:41.631746 env[1141]: time="2024-02-09T10:06:41.631713017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:41.632418 env[1141]: time="2024-02-09T10:06:41.632382117Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 10:06:41.634607 env[1141]: time="2024-02-09T10:06:41.634561841Z" level=info msg="CreateContainer within sandbox \"b2bdf6725ddc1aaa869f240840181f59087d7ce5d9e6f793582748150733929d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 10:06:41.644588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064894617.mount: Deactivated successfully. Feb 9 10:06:41.647959 env[1141]: time="2024-02-09T10:06:41.647928186Z" level=info msg="CreateContainer within sandbox \"b2bdf6725ddc1aaa869f240840181f59087d7ce5d9e6f793582748150733929d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"117f3bedea92c54244dec5f0a8501ee2d44fdc2bc0d3c9439031213c2ce9b942\"" Feb 9 10:06:41.648484 env[1141]: time="2024-02-09T10:06:41.648449704Z" level=info msg="StartContainer for \"117f3bedea92c54244dec5f0a8501ee2d44fdc2bc0d3c9439031213c2ce9b942\"" Feb 9 10:06:41.665431 systemd[1]: Started cri-containerd-117f3bedea92c54244dec5f0a8501ee2d44fdc2bc0d3c9439031213c2ce9b942.scope. 
Feb 9 10:06:41.699861 env[1141]: time="2024-02-09T10:06:41.699591701Z" level=info msg="StartContainer for \"117f3bedea92c54244dec5f0a8501ee2d44fdc2bc0d3c9439031213c2ce9b942\" returns successfully" Feb 9 10:06:42.012515 kubelet[1408]: E0209 10:06:42.012355 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:42.823168 systemd-networkd[1044]: lxc11c4a0f53048: Gained IPv6LL Feb 9 10:06:43.012534 kubelet[1408]: E0209 10:06:43.012496 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:44.012995 kubelet[1408]: E0209 10:06:44.012932 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:45.013353 kubelet[1408]: E0209 10:06:45.013304 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:46.014438 kubelet[1408]: E0209 10:06:46.014395 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:47.015511 kubelet[1408]: E0209 10:06:47.015439 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:48.016563 kubelet[1408]: E0209 10:06:48.016522 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:49.017221 kubelet[1408]: E0209 10:06:49.017154 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:49.024074 kubelet[1408]: I0209 10:06:49.024038 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.698902334 podCreationTimestamp="2024-02-09 10:06:26 +0000 UTC" firstStartedPulling="2024-02-09 
10:06:41.308177393 +0000 UTC m=+40.870493242" lastFinishedPulling="2024-02-09 10:06:41.63327733 +0000 UTC m=+41.195593219" observedRunningTime="2024-02-09 10:06:42.262405352 +0000 UTC m=+41.824721241" watchObservedRunningTime="2024-02-09 10:06:49.024002311 +0000 UTC m=+48.586318200" Feb 9 10:06:49.043280 systemd[1]: run-containerd-runc-k8s.io-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada-runc.w4U3dO.mount: Deactivated successfully. Feb 9 10:06:49.068856 env[1141]: time="2024-02-09T10:06:49.068796924Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:06:49.074423 env[1141]: time="2024-02-09T10:06:49.074368853Z" level=info msg="StopContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" with timeout 1 (s)" Feb 9 10:06:49.074669 env[1141]: time="2024-02-09T10:06:49.074635320Z" level=info msg="Stop container \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" with signal terminated" Feb 9 10:06:49.080208 systemd-networkd[1044]: lxc_health: Link DOWN Feb 9 10:06:49.080214 systemd-networkd[1044]: lxc_health: Lost carrier Feb 9 10:06:49.122673 systemd[1]: cri-containerd-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada.scope: Deactivated successfully. Feb 9 10:06:49.123738 systemd[1]: cri-containerd-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada.scope: Consumed 6.433s CPU time. Feb 9 10:06:49.146203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada-rootfs.mount: Deactivated successfully. 
Feb 9 10:06:49.263792 env[1141]: time="2024-02-09T10:06:49.263745667Z" level=info msg="shim disconnected" id=139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada Feb 9 10:06:49.264054 env[1141]: time="2024-02-09T10:06:49.264034537Z" level=warning msg="cleaning up after shim disconnected" id=139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada namespace=k8s.io Feb 9 10:06:49.264140 env[1141]: time="2024-02-09T10:06:49.264127066Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.274447 env[1141]: time="2024-02-09T10:06:49.274366352Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2893 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.278416 env[1141]: time="2024-02-09T10:06:49.278383842Z" level=info msg="StopContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" returns successfully" Feb 9 10:06:49.279121 env[1141]: time="2024-02-09T10:06:49.279099275Z" level=info msg="StopPodSandbox for \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\"" Feb 9 10:06:49.279271 env[1141]: time="2024-02-09T10:06:49.279250130Z" level=info msg="Container to stop \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.279370 env[1141]: time="2024-02-09T10:06:49.279352381Z" level=info msg="Container to stop \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.279454 env[1141]: time="2024-02-09T10:06:49.279437229Z" level=info msg="Container to stop \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.279523 env[1141]: time="2024-02-09T10:06:49.279507516Z" level=info msg="Container to stop 
\"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.279582 env[1141]: time="2024-02-09T10:06:49.279566522Z" level=info msg="Container to stop \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:49.280945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c-shm.mount: Deactivated successfully. Feb 9 10:06:49.287946 systemd[1]: cri-containerd-8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c.scope: Deactivated successfully. Feb 9 10:06:49.313020 env[1141]: time="2024-02-09T10:06:49.312967893Z" level=info msg="shim disconnected" id=8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c Feb 9 10:06:49.313020 env[1141]: time="2024-02-09T10:06:49.313020218Z" level=warning msg="cleaning up after shim disconnected" id=8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c namespace=k8s.io Feb 9 10:06:49.313194 env[1141]: time="2024-02-09T10:06:49.313029739Z" level=info msg="cleaning up dead shim" Feb 9 10:06:49.322259 env[1141]: time="2024-02-09T10:06:49.322224598Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n" Feb 9 10:06:49.322529 env[1141]: time="2024-02-09T10:06:49.322500626Z" level=info msg="TearDown network for sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" successfully" Feb 9 10:06:49.322572 env[1141]: time="2024-02-09T10:06:49.322528749Z" level=info msg="StopPodSandbox for \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" returns successfully" Feb 9 10:06:49.421941 kubelet[1408]: I0209 10:06:49.421892 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-cgroup\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.421941 kubelet[1408]: I0209 10:06:49.421937 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-xtables-lock\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 10:06:49.421957 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-etc-cni-netd\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 10:06:49.421974 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cni-path\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 10:06:49.422025 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-config-path\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 10:06:49.422044 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-net\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 
10:06:49.422066 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-hubble-tls\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422137 kubelet[1408]: I0209 10:06:49.422082 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-lib-modules\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422102 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ltm9\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-kube-api-access-6ltm9\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422120 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-kernel\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422136 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-hostproc\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422156 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4086108e-9d13-42a8-91b1-64f9e50221a8-clustermesh-secrets\") pod 
\"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422175 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-run\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422280 kubelet[1408]: I0209 10:06:49.422192 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-bpf-maps\") pod \"4086108e-9d13-42a8-91b1-64f9e50221a8\" (UID: \"4086108e-9d13-42a8-91b1-64f9e50221a8\") " Feb 9 10:06:49.422450 kubelet[1408]: I0209 10:06:49.422256 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.422450 kubelet[1408]: I0209 10:06:49.422290 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.422450 kubelet[1408]: I0209 10:06:49.422317 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.422450 kubelet[1408]: I0209 10:06:49.422333 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.422450 kubelet[1408]: I0209 10:06:49.422346 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.422565 kubelet[1408]: W0209 10:06:49.422509 1408 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4086108e-9d13-42a8-91b1-64f9e50221a8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:06:49.424246 kubelet[1408]: I0209 10:06:49.424206 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:06:49.424302 kubelet[1408]: I0209 10:06:49.424270 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.424866 kubelet[1408]: I0209 10:06:49.424578 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.424866 kubelet[1408]: I0209 10:06:49.424606 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.424866 kubelet[1408]: I0209 10:06:49.424657 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.424866 kubelet[1408]: I0209 10:06:49.424676 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:49.429056 kubelet[1408]: I0209 10:06:49.429005 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4086108e-9d13-42a8-91b1-64f9e50221a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:06:49.429056 kubelet[1408]: I0209 10:06:49.429044 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:49.429529 kubelet[1408]: I0209 10:06:49.429491 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-kube-api-access-6ltm9" (OuterVolumeSpecName: "kube-api-access-6ltm9") pod "4086108e-9d13-42a8-91b1-64f9e50221a8" (UID: "4086108e-9d13-42a8-91b1-64f9e50221a8"). InnerVolumeSpecName "kube-api-access-6ltm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.522948 1408 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4086108e-9d13-42a8-91b1-64f9e50221a8-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.522998 1408 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-run\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523009 1408 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523018 1408 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-cgroup\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523027 1408 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523037 1408 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523046 1408 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-cni-path\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523042 kubelet[1408]: I0209 10:06:49.523055 1408 
reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4086108e-9d13-42a8-91b1-64f9e50221a8-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523066 1408 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523075 1408 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-lib-modules\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523087 1408 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6ltm9\" (UniqueName: \"kubernetes.io/projected/4086108e-9d13-42a8-91b1-64f9e50221a8-kube-api-access-6ltm9\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523097 1408 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523105 1408 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-hostproc\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:49.523364 kubelet[1408]: I0209 10:06:49.523114 1408 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4086108e-9d13-42a8-91b1-64f9e50221a8-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:50.017864 kubelet[1408]: E0209 10:06:50.017812 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 10:06:50.037886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c-rootfs.mount: Deactivated successfully. Feb 9 10:06:50.038006 systemd[1]: var-lib-kubelet-pods-4086108e\x2d9d13\x2d42a8\x2d91b1\x2d64f9e50221a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ltm9.mount: Deactivated successfully. Feb 9 10:06:50.038071 systemd[1]: var-lib-kubelet-pods-4086108e\x2d9d13\x2d42a8\x2d91b1\x2d64f9e50221a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 10:06:50.038136 systemd[1]: var-lib-kubelet-pods-4086108e\x2d9d13\x2d42a8\x2d91b1\x2d64f9e50221a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:06:50.276266 kubelet[1408]: I0209 10:06:50.276171 1408 scope.go:115] "RemoveContainer" containerID="139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada" Feb 9 10:06:50.279700 systemd[1]: Removed slice kubepods-burstable-pod4086108e_9d13_42a8_91b1_64f9e50221a8.slice. Feb 9 10:06:50.279781 systemd[1]: kubepods-burstable-pod4086108e_9d13_42a8_91b1_64f9e50221a8.slice: Consumed 6.644s CPU time. 
Feb 9 10:06:50.281220 env[1141]: time="2024-02-09T10:06:50.281175017Z" level=info msg="RemoveContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\"" Feb 9 10:06:50.283887 env[1141]: time="2024-02-09T10:06:50.283847598Z" level=info msg="RemoveContainer for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" returns successfully" Feb 9 10:06:50.284112 kubelet[1408]: I0209 10:06:50.284088 1408 scope.go:115] "RemoveContainer" containerID="065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396" Feb 9 10:06:50.285841 env[1141]: time="2024-02-09T10:06:50.285579647Z" level=info msg="RemoveContainer for \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\"" Feb 9 10:06:50.294156 env[1141]: time="2024-02-09T10:06:50.294004111Z" level=info msg="RemoveContainer for \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\" returns successfully" Feb 9 10:06:50.294270 kubelet[1408]: I0209 10:06:50.294226 1408 scope.go:115] "RemoveContainer" containerID="0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740" Feb 9 10:06:50.295745 env[1141]: time="2024-02-09T10:06:50.295490457Z" level=info msg="RemoveContainer for \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\"" Feb 9 10:06:50.298062 env[1141]: time="2024-02-09T10:06:50.297868449Z" level=info msg="RemoveContainer for \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\" returns successfully" Feb 9 10:06:50.298129 kubelet[1408]: I0209 10:06:50.298102 1408 scope.go:115] "RemoveContainer" containerID="7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38" Feb 9 10:06:50.299265 env[1141]: time="2024-02-09T10:06:50.299041804Z" level=info msg="RemoveContainer for \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\"" Feb 9 10:06:50.301412 env[1141]: time="2024-02-09T10:06:50.301302145Z" level=info msg="RemoveContainer for 
\"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\" returns successfully" Feb 9 10:06:50.301474 kubelet[1408]: I0209 10:06:50.301447 1408 scope.go:115] "RemoveContainer" containerID="41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0" Feb 9 10:06:50.302727 env[1141]: time="2024-02-09T10:06:50.302364969Z" level=info msg="RemoveContainer for \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\"" Feb 9 10:06:50.304777 env[1141]: time="2024-02-09T10:06:50.304633791Z" level=info msg="RemoveContainer for \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\" returns successfully" Feb 9 10:06:50.304837 kubelet[1408]: I0209 10:06:50.304819 1408 scope.go:115] "RemoveContainer" containerID="139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada" Feb 9 10:06:50.305155 env[1141]: time="2024-02-09T10:06:50.305021389Z" level=error msg="ContainerStatus for \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\": not found" Feb 9 10:06:50.305224 kubelet[1408]: E0209 10:06:50.305191 1408 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\": not found" containerID="139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada" Feb 9 10:06:50.305257 kubelet[1408]: I0209 10:06:50.305224 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada} err="failed to get container status \"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"139608a28fcedf0ce2e3b4b84faaded8e373a80b589961befd6d56b66ddc5ada\": not found" Feb 9 10:06:50.305257 kubelet[1408]: I0209 10:06:50.305234 1408 scope.go:115] "RemoveContainer" containerID="065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396" Feb 9 10:06:50.305561 env[1141]: time="2024-02-09T10:06:50.305438950Z" level=error msg="ContainerStatus for \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\": not found" Feb 9 10:06:50.305628 kubelet[1408]: E0209 10:06:50.305568 1408 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\": not found" containerID="065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396" Feb 9 10:06:50.305628 kubelet[1408]: I0209 10:06:50.305590 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396} err="failed to get container status \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\": rpc error: code = NotFound desc = an error occurred when try to find container \"065f3f507869394833932547b1a212d93e85fc6a905336fa8a98145204149396\": not found" Feb 9 10:06:50.305628 kubelet[1408]: I0209 10:06:50.305598 1408 scope.go:115] "RemoveContainer" containerID="0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740" Feb 9 10:06:50.305953 kubelet[1408]: E0209 10:06:50.305857 1408 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\": not found" 
containerID="0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740" Feb 9 10:06:50.305953 kubelet[1408]: I0209 10:06:50.305875 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740} err="failed to get container status \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\": not found" Feb 9 10:06:50.305953 kubelet[1408]: I0209 10:06:50.305883 1408 scope.go:115] "RemoveContainer" containerID="7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38" Feb 9 10:06:50.306062 env[1141]: time="2024-02-09T10:06:50.305756861Z" level=error msg="ContainerStatus for \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b9be440a4e372b8135273eac718612b03f77526459b22fcdf6fad493b415740\": not found" Feb 9 10:06:50.306367 env[1141]: time="2024-02-09T10:06:50.306233187Z" level=error msg="ContainerStatus for \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\": not found" Feb 9 10:06:50.306435 kubelet[1408]: E0209 10:06:50.306376 1408 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\": not found" containerID="7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38" Feb 9 10:06:50.306435 kubelet[1408]: I0209 10:06:50.306402 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38} err="failed to get container status \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c19caa89188052a09563e7191266a1cff98d86f56f569f7f67441e5ca6c7b38\": not found" Feb 9 10:06:50.306435 kubelet[1408]: I0209 10:06:50.306410 1408 scope.go:115] "RemoveContainer" containerID="41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0" Feb 9 10:06:50.306747 env[1141]: time="2024-02-09T10:06:50.306632506Z" level=error msg="ContainerStatus for \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\": not found" Feb 9 10:06:50.306812 kubelet[1408]: E0209 10:06:50.306749 1408 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\": not found" containerID="41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0" Feb 9 10:06:50.306812 kubelet[1408]: I0209 10:06:50.306766 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0} err="failed to get container status \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"41e5b85ae5f4f8b3075beebe13fc89df8e246db44d3f16c2670f007968b440b0\": not found" Feb 9 10:06:51.018149 kubelet[1408]: E0209 10:06:51.018091 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:51.157580 kubelet[1408]: E0209 10:06:51.157535 1408 kubelet.go:2760] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:06:51.175406 kubelet[1408]: I0209 10:06:51.174930 1408 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4086108e-9d13-42a8-91b1-64f9e50221a8 path="/var/lib/kubelet/pods/4086108e-9d13-42a8-91b1-64f9e50221a8/volumes" Feb 9 10:06:52.019088 kubelet[1408]: E0209 10:06:52.019047 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:52.946557 kubelet[1408]: I0209 10:06:52.946516 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:52.946557 kubelet[1408]: E0209 10:06:52.946568 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="mount-cgroup" Feb 9 10:06:52.946755 kubelet[1408]: E0209 10:06:52.946578 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="apply-sysctl-overwrites" Feb 9 10:06:52.946755 kubelet[1408]: E0209 10:06:52.946584 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="mount-bpf-fs" Feb 9 10:06:52.946755 kubelet[1408]: E0209 10:06:52.946594 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="clean-cilium-state" Feb 9 10:06:52.946755 kubelet[1408]: E0209 10:06:52.946600 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="cilium-agent" Feb 9 10:06:52.946755 kubelet[1408]: I0209 10:06:52.946628 1408 memory_manager.go:346] "RemoveStaleState removing state" podUID="4086108e-9d13-42a8-91b1-64f9e50221a8" containerName="cilium-agent" Feb 9 10:06:52.952452 systemd[1]: Created slice 
kubepods-burstable-pod71c8e7bc_2232_416a_8b88_cdd8fedd1616.slice. Feb 9 10:06:52.955855 kubelet[1408]: I0209 10:06:52.955824 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:52.972375 systemd[1]: Created slice kubepods-besteffort-pod7e688848_23bc_4358_b19f_b7c059bfd125.slice. Feb 9 10:06:53.020039 kubelet[1408]: E0209 10:06:53.019958 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:53.042314 kubelet[1408]: I0209 10:06:53.042275 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cni-path\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042314 kubelet[1408]: I0209 10:06:53.042317 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-lib-modules\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042426 kubelet[1408]: I0209 10:06:53.042338 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-xtables-lock\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042426 kubelet[1408]: I0209 10:06:53.042359 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hostproc\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042426 kubelet[1408]: I0209 10:06:53.042376 
1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-cgroup\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042426 kubelet[1408]: I0209 10:06:53.042395 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-clustermesh-secrets\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042426 kubelet[1408]: I0209 10:06:53.042413 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hubble-tls\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042430 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-run\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042458 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-bpf-maps\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042478 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-etc-cni-netd\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042504 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-config-path\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042524 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-ipsec-secrets\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042542 kubelet[1408]: I0209 10:06:53.042542 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hsm\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-kube-api-access-92hsm\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042684 kubelet[1408]: I0209 10:06:53.042561 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-net\") pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.042684 kubelet[1408]: I0209 10:06:53.042580 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-kernel\") 
pod \"cilium-pgddp\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " pod="kube-system/cilium-pgddp" Feb 9 10:06:53.143279 kubelet[1408]: I0209 10:06:53.143243 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcg5c\" (UniqueName: \"kubernetes.io/projected/7e688848-23bc-4358-b19f-b7c059bfd125-kube-api-access-kcg5c\") pod \"cilium-operator-574c4bb98d-cs6v5\" (UID: \"7e688848-23bc-4358-b19f-b7c059bfd125\") " pod="kube-system/cilium-operator-574c4bb98d-cs6v5" Feb 9 10:06:53.143527 kubelet[1408]: I0209 10:06:53.143513 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e688848-23bc-4358-b19f-b7c059bfd125-cilium-config-path\") pod \"cilium-operator-574c4bb98d-cs6v5\" (UID: \"7e688848-23bc-4358-b19f-b7c059bfd125\") " pod="kube-system/cilium-operator-574c4bb98d-cs6v5" Feb 9 10:06:53.270275 kubelet[1408]: E0209 10:06:53.270174 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.271224 env[1141]: time="2024-02-09T10:06:53.270657623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgddp,Uid:71c8e7bc-2232-416a-8b88-cdd8fedd1616,Namespace:kube-system,Attempt:0,}" Feb 9 10:06:53.274652 kubelet[1408]: E0209 10:06:53.274625 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.275073 env[1141]: time="2024-02-09T10:06:53.275013199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-cs6v5,Uid:7e688848-23bc-4358-b19f-b7c059bfd125,Namespace:kube-system,Attempt:0,}" Feb 9 10:06:53.282424 env[1141]: time="2024-02-09T10:06:53.282334592Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:53.282424 env[1141]: time="2024-02-09T10:06:53.282374996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:53.282424 env[1141]: time="2024-02-09T10:06:53.282385517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:53.282962 env[1141]: time="2024-02-09T10:06:53.282805793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96 pid=2955 runtime=io.containerd.runc.v2 Feb 9 10:06:53.289254 env[1141]: time="2024-02-09T10:06:53.289135540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:06:53.289254 env[1141]: time="2024-02-09T10:06:53.289170103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:06:53.289254 env[1141]: time="2024-02-09T10:06:53.289186784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:06:53.290242 env[1141]: time="2024-02-09T10:06:53.289452087Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08c35d57c7bb3822b3b8e8178aa9357fa38cb3aead113b7f54fb8147cc9ca21f pid=2974 runtime=io.containerd.runc.v2 Feb 9 10:06:53.295900 systemd[1]: Started cri-containerd-0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96.scope. Feb 9 10:06:53.303675 systemd[1]: Started cri-containerd-08c35d57c7bb3822b3b8e8178aa9357fa38cb3aead113b7f54fb8147cc9ca21f.scope. 
Feb 9 10:06:53.349848 env[1141]: time="2024-02-09T10:06:53.349810946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgddp,Uid:71c8e7bc-2232-416a-8b88-cdd8fedd1616,Namespace:kube-system,Attempt:0,} returns sandbox id \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\"" Feb 9 10:06:53.351095 kubelet[1408]: E0209 10:06:53.350510 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.352810 env[1141]: time="2024-02-09T10:06:53.352764641Z" level=info msg="CreateContainer within sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:06:53.359348 env[1141]: time="2024-02-09T10:06:53.359313487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-cs6v5,Uid:7e688848-23bc-4358-b19f-b7c059bfd125,Namespace:kube-system,Attempt:0,} returns sandbox id \"08c35d57c7bb3822b3b8e8178aa9357fa38cb3aead113b7f54fb8147cc9ca21f\"" Feb 9 10:06:53.360561 kubelet[1408]: E0209 10:06:53.360142 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:53.361166 env[1141]: time="2024-02-09T10:06:53.361120283Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 10:06:53.364099 env[1141]: time="2024-02-09T10:06:53.364063898Z" level=info msg="CreateContainer within sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\"" Feb 9 10:06:53.364573 env[1141]: time="2024-02-09T10:06:53.364541579Z" level=info 
msg="StartContainer for \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\"" Feb 9 10:06:53.378169 systemd[1]: Started cri-containerd-7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d.scope. Feb 9 10:06:53.399847 systemd[1]: cri-containerd-7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d.scope: Deactivated successfully. Feb 9 10:06:53.412997 env[1141]: time="2024-02-09T10:06:53.412936643Z" level=info msg="shim disconnected" id=7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d Feb 9 10:06:53.412997 env[1141]: time="2024-02-09T10:06:53.412997608Z" level=warning msg="cleaning up after shim disconnected" id=7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d namespace=k8s.io Feb 9 10:06:53.413210 env[1141]: time="2024-02-09T10:06:53.413007489Z" level=info msg="cleaning up dead shim" Feb 9 10:06:53.419560 env[1141]: time="2024-02-09T10:06:53.419519492Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3053 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:06:53Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 10:06:53.419871 env[1141]: time="2024-02-09T10:06:53.419761033Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Feb 9 10:06:53.420080 env[1141]: time="2024-02-09T10:06:53.420038137Z" level=error msg="Failed to pipe stderr of container \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\"" error="reading from a closed fifo" Feb 9 10:06:53.422101 env[1141]: time="2024-02-09T10:06:53.422066192Z" level=error msg="Failed to pipe stdout of container \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\"" error="reading from a closed fifo" Feb 9 
10:06:53.423930 env[1141]: time="2024-02-09T10:06:53.423868508Z" level=error msg="StartContainer for \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 10:06:53.424149 kubelet[1408]: E0209 10:06:53.424119 1408 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d" Feb 9 10:06:53.424274 kubelet[1408]: E0209 10:06:53.424236 1408 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 10:06:53.424274 kubelet[1408]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 10:06:53.424274 kubelet[1408]: rm /hostbin/cilium-mount Feb 9 10:06:53.424345 kubelet[1408]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-92hsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-pgddp_kube-system(71c8e7bc-2232-416a-8b88-cdd8fedd1616): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 10:06:53.424345 kubelet[1408]: E0209 10:06:53.424300 1408 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pgddp" podUID=71c8e7bc-2232-416a-8b88-cdd8fedd1616 Feb 9 10:06:53.849961 kubelet[1408]: I0209 10:06:53.849933 1408 setters.go:548] "Node became not ready" node="10.0.0.123" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:06:53.849892498 +0000 UTC m=+53.412208387 LastTransitionTime:2024-02-09 10:06:53.849892498 +0000 UTC m=+53.412208387 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:06:54.020993 kubelet[1408]: E0209 10:06:54.020928 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:54.284651 env[1141]: time="2024-02-09T10:06:54.284597299Z" level=info msg="StopPodSandbox for \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\"" Feb 9 10:06:54.285015 env[1141]: time="2024-02-09T10:06:54.284660464Z" level=info msg="Container to stop \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:06:54.286101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96-shm.mount: Deactivated successfully. Feb 9 10:06:54.290949 systemd[1]: cri-containerd-0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96.scope: Deactivated successfully. Feb 9 10:06:54.312229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96-rootfs.mount: Deactivated successfully. 
Feb 9 10:06:54.343139 env[1141]: time="2024-02-09T10:06:54.343085441Z" level=info msg="shim disconnected" id=0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96 Feb 9 10:06:54.343354 env[1141]: time="2024-02-09T10:06:54.343336982Z" level=warning msg="cleaning up after shim disconnected" id=0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96 namespace=k8s.io Feb 9 10:06:54.343412 env[1141]: time="2024-02-09T10:06:54.343399267Z" level=info msg="cleaning up dead shim" Feb 9 10:06:54.350090 env[1141]: time="2024-02-09T10:06:54.350055221Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3084 runtime=io.containerd.runc.v2\n" Feb 9 10:06:54.350502 env[1141]: time="2024-02-09T10:06:54.350473295Z" level=info msg="TearDown network for sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" successfully" Feb 9 10:06:54.350590 env[1141]: time="2024-02-09T10:06:54.350572384Z" level=info msg="StopPodSandbox for \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" returns successfully" Feb 9 10:06:54.451473 kubelet[1408]: I0209 10:06:54.451412 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.451473 kubelet[1408]: I0209 10:06:54.451467 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-kernel\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451525 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-lib-modules\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451546 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hostproc\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451571 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-clustermesh-secrets\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451585 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451594 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hubble-tls\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451605 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hostproc" (OuterVolumeSpecName: "hostproc") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451614 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-ipsec-secrets\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451632 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cni-path\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451649 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-xtables-lock\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451664 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-bpf-maps\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451675 kubelet[1408]: I0209 10:06:54.451682 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-net\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451703 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-cgroup\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451720 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-run\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451747 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92hsm\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-kube-api-access-92hsm\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451771 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-etc-cni-netd\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: 
I0209 10:06:54.451793 1408 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-config-path\") pod \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\" (UID: \"71c8e7bc-2232-416a-8b88-cdd8fedd1616\") " Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451825 1408 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-kernel\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451836 1408 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-lib-modules\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451846 1408 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hostproc\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.451946 kubelet[1408]: I0209 10:06:54.451896 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: W0209 10:06:54.452006 1408 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/71c8e7bc-2232-416a-8b88-cdd8fedd1616/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452035 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452070 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452087 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452108 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cni-path" (OuterVolumeSpecName: "cni-path") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452178 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.452566 kubelet[1408]: I0209 10:06:54.452307 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:06:54.453798 kubelet[1408]: I0209 10:06:54.453763 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:06:54.457060 kubelet[1408]: I0209 10:06:54.456760 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:54.457174 systemd[1]: var-lib-kubelet-pods-71c8e7bc\x2d2232\x2d416a\x2d8b88\x2dcdd8fedd1616-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 10:06:54.459370 systemd[1]: var-lib-kubelet-pods-71c8e7bc\x2d2232\x2d416a\x2d8b88\x2dcdd8fedd1616-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92hsm.mount: Deactivated successfully. Feb 9 10:06:54.460054 kubelet[1408]: I0209 10:06:54.460020 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-kube-api-access-92hsm" (OuterVolumeSpecName: "kube-api-access-92hsm") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "kube-api-access-92hsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:06:54.460355 kubelet[1408]: I0209 10:06:54.460319 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:06:54.460450 kubelet[1408]: I0209 10:06:54.460433 1408 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "71c8e7bc-2232-416a-8b88-cdd8fedd1616" (UID: "71c8e7bc-2232-416a-8b88-cdd8fedd1616"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552888 1408 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-clustermesh-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552923 1408 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-hubble-tls\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552937 1408 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-ipsec-secrets\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552947 1408 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-host-proc-sys-net\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552959 1408 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cni-path\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552968 1408 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-xtables-lock\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.552990 1408 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-bpf-maps\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.553017 kubelet[1408]: I0209 10:06:54.553000 1408 
reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-cgroup\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.555025 kubelet[1408]: I0209 10:06:54.554996 1408 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-run\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.555068 kubelet[1408]: I0209 10:06:54.555030 1408 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-92hsm\" (UniqueName: \"kubernetes.io/projected/71c8e7bc-2232-416a-8b88-cdd8fedd1616-kube-api-access-92hsm\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.555068 kubelet[1408]: I0209 10:06:54.555041 1408 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71c8e7bc-2232-416a-8b88-cdd8fedd1616-etc-cni-netd\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.555068 kubelet[1408]: I0209 10:06:54.555052 1408 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71c8e7bc-2232-416a-8b88-cdd8fedd1616-cilium-config-path\") on node \"10.0.0.123\" DevicePath \"\"" Feb 9 10:06:54.578037 env[1141]: time="2024-02-09T10:06:54.577961887Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:54.578993 env[1141]: time="2024-02-09T10:06:54.578952769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:54.580398 env[1141]: time="2024-02-09T10:06:54.580358126Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 10:06:54.580891 env[1141]: time="2024-02-09T10:06:54.580860568Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 10:06:54.582459 env[1141]: time="2024-02-09T10:06:54.582427418Z" level=info msg="CreateContainer within sandbox \"08c35d57c7bb3822b3b8e8178aa9357fa38cb3aead113b7f54fb8147cc9ca21f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 10:06:54.590222 env[1141]: time="2024-02-09T10:06:54.590189303Z" level=info msg="CreateContainer within sandbox \"08c35d57c7bb3822b3b8e8178aa9357fa38cb3aead113b7f54fb8147cc9ca21f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a746349c1e9d8bbde5816882137439a7acf5542caaaf54f1a70bc4077f61d493\"" Feb 9 10:06:54.590929 env[1141]: time="2024-02-09T10:06:54.590819836Z" level=info msg="StartContainer for \"a746349c1e9d8bbde5816882137439a7acf5542caaaf54f1a70bc4077f61d493\"" Feb 9 10:06:54.603895 systemd[1]: Started cri-containerd-a746349c1e9d8bbde5816882137439a7acf5542caaaf54f1a70bc4077f61d493.scope. Feb 9 10:06:54.681665 env[1141]: time="2024-02-09T10:06:54.681599663Z" level=info msg="StartContainer for \"a746349c1e9d8bbde5816882137439a7acf5542caaaf54f1a70bc4077f61d493\" returns successfully" Feb 9 10:06:55.021459 kubelet[1408]: E0209 10:06:55.021413 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:06:55.147768 systemd[1]: var-lib-kubelet-pods-71c8e7bc\x2d2232\x2d416a\x2d8b88\x2dcdd8fedd1616-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 10:06:55.147865 systemd[1]: var-lib-kubelet-pods-71c8e7bc\x2d2232\x2d416a\x2d8b88\x2dcdd8fedd1616-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 10:06:55.178749 systemd[1]: Removed slice kubepods-burstable-pod71c8e7bc_2232_416a_8b88_cdd8fedd1616.slice. Feb 9 10:06:55.289259 kubelet[1408]: I0209 10:06:55.288798 1408 scope.go:115] "RemoveContainer" containerID="7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d" Feb 9 10:06:55.290432 env[1141]: time="2024-02-09T10:06:55.290396432Z" level=info msg="RemoveContainer for \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\"" Feb 9 10:06:55.291532 kubelet[1408]: E0209 10:06:55.291509 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 10:06:55.294041 env[1141]: time="2024-02-09T10:06:55.294006801Z" level=info msg="RemoveContainer for \"7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d\" returns successfully" Feb 9 10:06:55.319036 kubelet[1408]: I0209 10:06:55.318660 1408 topology_manager.go:212] "Topology Admit Handler" Feb 9 10:06:55.319036 kubelet[1408]: E0209 10:06:55.318716 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71c8e7bc-2232-416a-8b88-cdd8fedd1616" containerName="mount-cgroup" Feb 9 10:06:55.319036 kubelet[1408]: I0209 10:06:55.318737 1408 memory_manager.go:346] "RemoveStaleState removing state" podUID="71c8e7bc-2232-416a-8b88-cdd8fedd1616" containerName="mount-cgroup" Feb 9 10:06:55.324142 systemd[1]: Created slice kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice. 
Feb 9 10:06:55.334396 kubelet[1408]: I0209 10:06:55.334361 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-cs6v5" podStartSLOduration=2.114036816 podCreationTimestamp="2024-02-09 10:06:52 +0000 UTC" firstStartedPulling="2024-02-09 10:06:53.360829098 +0000 UTC m=+52.923144987" lastFinishedPulling="2024-02-09 10:06:54.581116269 +0000 UTC m=+54.143432118" observedRunningTime="2024-02-09 10:06:55.320846069 +0000 UTC m=+54.883161918" watchObservedRunningTime="2024-02-09 10:06:55.334323947 +0000 UTC m=+54.896639836"
Feb 9 10:06:55.459357 kubelet[1408]: I0209 10:06:55.459323 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-lib-modules\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459476 kubelet[1408]: I0209 10:06:55.459389 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1afb8956-cbbb-47b6-bde0-59836d8689fa-cilium-ipsec-secrets\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459476 kubelet[1408]: I0209 10:06:55.459426 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1afb8956-cbbb-47b6-bde0-59836d8689fa-hubble-tls\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459476 kubelet[1408]: I0209 10:06:55.459444 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-bpf-maps\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459476 kubelet[1408]: I0209 10:06:55.459463 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-xtables-lock\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459487 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1afb8956-cbbb-47b6-bde0-59836d8689fa-clustermesh-secrets\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459510 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1afb8956-cbbb-47b6-bde0-59836d8689fa-cilium-config-path\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459529 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-host-proc-sys-net\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459551 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-etc-cni-netd\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459580 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-cni-path\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459601 kubelet[1408]: I0209 10:06:55.459601 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-host-proc-sys-kernel\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459738 kubelet[1408]: I0209 10:06:55.459620 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-cilium-run\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459738 kubelet[1408]: I0209 10:06:55.459639 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-hostproc\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459738 kubelet[1408]: I0209 10:06:55.459658 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1afb8956-cbbb-47b6-bde0-59836d8689fa-cilium-cgroup\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.459738 kubelet[1408]: I0209 10:06:55.459676 1408 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdj5x\" (UniqueName: \"kubernetes.io/projected/1afb8956-cbbb-47b6-bde0-59836d8689fa-kube-api-access-sdj5x\") pod \"cilium-h9s5v\" (UID: \"1afb8956-cbbb-47b6-bde0-59836d8689fa\") " pod="kube-system/cilium-h9s5v"
Feb 9 10:06:55.635928 kubelet[1408]: E0209 10:06:55.635891 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:55.636485 env[1141]: time="2024-02-09T10:06:55.636442643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9s5v,Uid:1afb8956-cbbb-47b6-bde0-59836d8689fa,Namespace:kube-system,Attempt:0,}"
Feb 9 10:06:55.647337 env[1141]: time="2024-02-09T10:06:55.647273669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:06:55.647452 env[1141]: time="2024-02-09T10:06:55.647331634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:06:55.647452 env[1141]: time="2024-02-09T10:06:55.647350635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:06:55.647516 env[1141]: time="2024-02-09T10:06:55.647482046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6 pid=3152 runtime=io.containerd.runc.v2
Feb 9 10:06:55.656716 systemd[1]: Started cri-containerd-193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6.scope.
Feb 9 10:06:55.687359 env[1141]: time="2024-02-09T10:06:55.687306193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9s5v,Uid:1afb8956-cbbb-47b6-bde0-59836d8689fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\""
Feb 9 10:06:55.687962 kubelet[1408]: E0209 10:06:55.687944 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:55.689970 env[1141]: time="2024-02-09T10:06:55.689936163Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:06:55.699875 env[1141]: time="2024-02-09T10:06:55.699823514Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc\""
Feb 9 10:06:55.700405 env[1141]: time="2024-02-09T10:06:55.700373118Z" level=info msg="StartContainer for \"13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc\""
Feb 9 10:06:55.713116 systemd[1]: Started cri-containerd-13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc.scope.
Feb 9 10:06:55.746196 env[1141]: time="2024-02-09T10:06:55.746138540Z" level=info msg="StartContainer for \"13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc\" returns successfully"
Feb 9 10:06:55.751928 systemd[1]: cri-containerd-13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc.scope: Deactivated successfully.
Feb 9 10:06:55.769103 env[1141]: time="2024-02-09T10:06:55.769058214Z" level=info msg="shim disconnected" id=13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc
Feb 9 10:06:55.769103 env[1141]: time="2024-02-09T10:06:55.769103898Z" level=warning msg="cleaning up after shim disconnected" id=13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc namespace=k8s.io
Feb 9 10:06:55.769282 env[1141]: time="2024-02-09T10:06:55.769112979Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:55.775576 env[1141]: time="2024-02-09T10:06:55.775532172Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3234 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:56.022063 kubelet[1408]: E0209 10:06:56.021942 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:56.158573 kubelet[1408]: E0209 10:06:56.158551 1408 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:06:56.295463 kubelet[1408]: E0209 10:06:56.295367 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:56.295682 kubelet[1408]: E0209 10:06:56.295667 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:56.297449 env[1141]: time="2024-02-09T10:06:56.297402125Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:06:56.307017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268464028.mount: Deactivated successfully.
Feb 9 10:06:56.308350 env[1141]: time="2024-02-09T10:06:56.308308366Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332\""
Feb 9 10:06:56.310027 env[1141]: time="2024-02-09T10:06:56.309974454Z" level=info msg="StartContainer for \"1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332\""
Feb 9 10:06:56.325280 systemd[1]: Started cri-containerd-1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332.scope.
Feb 9 10:06:56.357162 env[1141]: time="2024-02-09T10:06:56.357119809Z" level=info msg="StartContainer for \"1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332\" returns successfully"
Feb 9 10:06:56.360808 systemd[1]: cri-containerd-1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332.scope: Deactivated successfully.
Feb 9 10:06:56.379360 env[1141]: time="2024-02-09T10:06:56.379311840Z" level=info msg="shim disconnected" id=1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332
Feb 9 10:06:56.379360 env[1141]: time="2024-02-09T10:06:56.379354204Z" level=warning msg="cleaning up after shim disconnected" id=1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332 namespace=k8s.io
Feb 9 10:06:56.379360 env[1141]: time="2024-02-09T10:06:56.379364844Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:56.386685 env[1141]: time="2024-02-09T10:06:56.386647926Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3296 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:56.518205 kubelet[1408]: W0209 10:06:56.518162 1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c8e7bc_2232_416a_8b88_cdd8fedd1616.slice/cri-containerd-7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d.scope WatchSource:0}: container "7a4119d539ce2f406a52a1bf2ce04ef407bbf236d72515f9b46a1e6ba26c9a6d" in namespace "k8s.io": not found
Feb 9 10:06:57.022934 kubelet[1408]: E0209 10:06:57.022906 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:57.147426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332-rootfs.mount: Deactivated successfully.
Feb 9 10:06:57.175706 kubelet[1408]: I0209 10:06:57.175668 1408 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=71c8e7bc-2232-416a-8b88-cdd8fedd1616 path="/var/lib/kubelet/pods/71c8e7bc-2232-416a-8b88-cdd8fedd1616/volumes"
Feb 9 10:06:57.297994 kubelet[1408]: E0209 10:06:57.297895 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:57.299697 env[1141]: time="2024-02-09T10:06:57.299654182Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:06:57.311029 env[1141]: time="2024-02-09T10:06:57.310972183Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace\""
Feb 9 10:06:57.311767 env[1141]: time="2024-02-09T10:06:57.311743681Z" level=info msg="StartContainer for \"cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace\""
Feb 9 10:06:57.332352 systemd[1]: Started cri-containerd-cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace.scope.
Feb 9 10:06:57.360518 systemd[1]: cri-containerd-cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace.scope: Deactivated successfully.
Feb 9 10:06:57.362165 env[1141]: time="2024-02-09T10:06:57.362124867Z" level=info msg="StartContainer for \"cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace\" returns successfully"
Feb 9 10:06:57.380098 env[1141]: time="2024-02-09T10:06:57.380042880Z" level=info msg="shim disconnected" id=cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace
Feb 9 10:06:57.380098 env[1141]: time="2024-02-09T10:06:57.380092924Z" level=warning msg="cleaning up after shim disconnected" id=cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace namespace=k8s.io
Feb 9 10:06:57.380098 env[1141]: time="2024-02-09T10:06:57.380102964Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:57.387137 env[1141]: time="2024-02-09T10:06:57.387093844Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3353 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:58.024442 kubelet[1408]: E0209 10:06:58.024396 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:58.147448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace-rootfs.mount: Deactivated successfully.
Feb 9 10:06:58.301744 kubelet[1408]: E0209 10:06:58.301334 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:58.303329 env[1141]: time="2024-02-09T10:06:58.303289380Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:06:58.313293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076671504.mount: Deactivated successfully.
Feb 9 10:06:58.316602 env[1141]: time="2024-02-09T10:06:58.316561613Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83\""
Feb 9 10:06:58.317302 env[1141]: time="2024-02-09T10:06:58.317277065Z" level=info msg="StartContainer for \"244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83\""
Feb 9 10:06:58.330958 systemd[1]: Started cri-containerd-244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83.scope.
Feb 9 10:06:58.360444 systemd[1]: cri-containerd-244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83.scope: Deactivated successfully.
Feb 9 10:06:58.362639 env[1141]: time="2024-02-09T10:06:58.362592718Z" level=info msg="StartContainer for \"244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83\" returns successfully"
Feb 9 10:06:58.363530 env[1141]: time="2024-02-09T10:06:58.363359214Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice/cri-containerd-244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83.scope/cgroup.events\": no such file or directory"
Feb 9 10:06:58.379589 env[1141]: time="2024-02-09T10:06:58.379548056Z" level=info msg="shim disconnected" id=244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83
Feb 9 10:06:58.379729 env[1141]: time="2024-02-09T10:06:58.379590579Z" level=warning msg="cleaning up after shim disconnected" id=244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83 namespace=k8s.io
Feb 9 10:06:58.379729 env[1141]: time="2024-02-09T10:06:58.379600020Z" level=info msg="cleaning up dead shim"
Feb 9 10:06:58.385663 env[1141]: time="2024-02-09T10:06:58.385619492Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:06:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3408 runtime=io.containerd.runc.v2\n"
Feb 9 10:06:59.025381 kubelet[1408]: E0209 10:06:59.025341 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:06:59.147592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83-rootfs.mount: Deactivated successfully.
Feb 9 10:06:59.305320 kubelet[1408]: E0209 10:06:59.305239 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:06:59.307348 env[1141]: time="2024-02-09T10:06:59.307288647Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:06:59.320838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062368594.mount: Deactivated successfully.
Feb 9 10:06:59.325505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2648934989.mount: Deactivated successfully.
Feb 9 10:06:59.329182 env[1141]: time="2024-02-09T10:06:59.329134243Z" level=info msg="CreateContainer within sandbox \"193f9c21a43a609b1d9aecb5c17421f6ab72ccd554bf9900de37b35cfdc09bd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c\""
Feb 9 10:06:59.329675 env[1141]: time="2024-02-09T10:06:59.329643238Z" level=info msg="StartContainer for \"ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c\""
Feb 9 10:06:59.342691 systemd[1]: Started cri-containerd-ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c.scope.
Feb 9 10:06:59.373801 env[1141]: time="2024-02-09T10:06:59.373757700Z" level=info msg="StartContainer for \"ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c\" returns successfully"
Feb 9 10:06:59.614021 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 10:06:59.628901 kubelet[1408]: W0209 10:06:59.628864 1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice/cri-containerd-13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc.scope WatchSource:0}: task 13cd041d2658ac0628d77fd08819677370243276a47122d917fe4ef4547a9ffc not found: not found
Feb 9 10:07:00.026584 kubelet[1408]: E0209 10:07:00.026484 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:00.310517 kubelet[1408]: E0209 10:07:00.310425 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:00.323108 kubelet[1408]: I0209 10:07:00.323071 1408 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h9s5v" podStartSLOduration=5.323037079 podCreationTimestamp="2024-02-09 10:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:07:00.322182341 +0000 UTC m=+59.884498230" watchObservedRunningTime="2024-02-09 10:07:00.323037079 +0000 UTC m=+59.885352968"
Feb 9 10:07:00.987019 kubelet[1408]: E0209 10:07:00.986985 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:00.994071 env[1141]: time="2024-02-09T10:07:00.994032092Z" level=info msg="StopPodSandbox for \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\""
Feb 9 10:07:00.994466 env[1141]: time="2024-02-09T10:07:00.994118218Z" level=info msg="TearDown network for sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" successfully"
Feb 9 10:07:00.994466 env[1141]: time="2024-02-09T10:07:00.994151900Z" level=info msg="StopPodSandbox for \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" returns successfully"
Feb 9 10:07:00.994562 env[1141]: time="2024-02-09T10:07:00.994521165Z" level=info msg="RemovePodSandbox for \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\""
Feb 9 10:07:00.994606 env[1141]: time="2024-02-09T10:07:00.994561327Z" level=info msg="Forcibly stopping sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\""
Feb 9 10:07:00.994647 env[1141]: time="2024-02-09T10:07:00.994627132Z" level=info msg="TearDown network for sandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" successfully"
Feb 9 10:07:00.998535 env[1141]: time="2024-02-09T10:07:00.998493431Z" level=info msg="RemovePodSandbox \"0981d4aa65b1b78fc7d0333d5ce61551aab3a5ed915157b0e56f504c81f85a96\" returns successfully"
Feb 9 10:07:00.999006 env[1141]: time="2024-02-09T10:07:00.998956743Z" level=info msg="StopPodSandbox for \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\""
Feb 9 10:07:00.999180 env[1141]: time="2024-02-09T10:07:00.999141795Z" level=info msg="TearDown network for sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" successfully"
Feb 9 10:07:00.999265 env[1141]: time="2024-02-09T10:07:00.999247882Z" level=info msg="StopPodSandbox for \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" returns successfully"
Feb 9 10:07:00.999623 env[1141]: time="2024-02-09T10:07:00.999581345Z" level=info msg="RemovePodSandbox for \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\""
Feb 9 10:07:00.999680 env[1141]: time="2024-02-09T10:07:00.999627748Z" level=info msg="Forcibly stopping sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\""
Feb 9 10:07:00.999707 env[1141]: time="2024-02-09T10:07:00.999688592Z" level=info msg="TearDown network for sandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" successfully"
Feb 9 10:07:01.002036 env[1141]: time="2024-02-09T10:07:01.002000305Z" level=info msg="RemovePodSandbox \"8476ae34c76b70e0d41932ad4aa1d8d788150924e124e42acbd689c2c7c44f1c\" returns successfully"
Feb 9 10:07:01.027211 kubelet[1408]: E0209 10:07:01.027175 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:01.461737 systemd[1]: run-containerd-runc-k8s.io-ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c-runc.D1K0p3.mount: Deactivated successfully.
Feb 9 10:07:01.637938 kubelet[1408]: E0209 10:07:01.637910 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:02.027854 kubelet[1408]: E0209 10:07:02.027823 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:02.274046 systemd-networkd[1044]: lxc_health: Link UP
Feb 9 10:07:02.284345 systemd-networkd[1044]: lxc_health: Gained carrier
Feb 9 10:07:02.285010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:07:02.736145 kubelet[1408]: W0209 10:07:02.736086 1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice/cri-containerd-1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332.scope WatchSource:0}: task 1154586673465f263e2d4a6c48b264ddf79e82251fd24f9cbfc7f243b1980332 not found: not found
Feb 9 10:07:03.029238 kubelet[1408]: E0209 10:07:03.029114 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:03.644861 kubelet[1408]: E0209 10:07:03.644820 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:04.029518 kubelet[1408]: E0209 10:07:04.029196 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:04.318649 kubelet[1408]: E0209 10:07:04.318307 1408 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 10:07:04.328100 systemd-networkd[1044]: lxc_health: Gained IPv6LL
Feb 9 10:07:05.029633 kubelet[1408]: E0209 10:07:05.029595 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:05.745020 systemd[1]: run-containerd-runc-k8s.io-ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c-runc.WM9wT2.mount: Deactivated successfully.
Feb 9 10:07:05.841803 kubelet[1408]: W0209 10:07:05.841763 1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice/cri-containerd-cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace.scope WatchSource:0}: task cb8eb7b6909a10c2cc6e4901f42d381adcd37c02723af89b9ca0e2f829998ace not found: not found
Feb 9 10:07:06.031149 kubelet[1408]: E0209 10:07:06.031037 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:07.031921 kubelet[1408]: E0209 10:07:07.031875 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:07.931743 systemd[1]: run-containerd-runc-k8s.io-ce2b189735b1dc9d8eb53a2bc9e6b27ed92fcc01b9d889c0c04872f65a98f56c-runc.1xLWrb.mount: Deactivated successfully.
Feb 9 10:07:08.033041 kubelet[1408]: E0209 10:07:08.033000 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:07:08.948057 kubelet[1408]: W0209 10:07:08.948019 1408 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1afb8956_cbbb_47b6_bde0_59836d8689fa.slice/cri-containerd-244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83.scope WatchSource:0}: task 244070db38c66b71a0db4a75d0e8299f0198e06f9a502515f921e0535b951c83 not found: not found
Feb 9 10:07:09.033589 kubelet[1408]: E0209 10:07:09.033558 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"