Feb 9 09:56:58.024642 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 09:56:58.024660 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:56:58.024668 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 09:56:58.024675 kernel: printk: bootconsole [pl11] enabled
Feb 9 09:56:58.024680 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:56:58.024686 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 09:56:58.024692 kernel: random: crng init done
Feb 9 09:56:58.024697 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:56:58.024703 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 09:56:58.024708 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024713 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024720 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 09:56:58.024725 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024731 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024737 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024743 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024749 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024756 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024761 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 09:56:58.024767 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 09:56:58.024773 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 09:56:58.024778 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:56:58.024784 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:56:58.024789 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff]
Feb 9 09:56:58.024795 kernel: Zone ranges:
Feb 9 09:56:58.024801 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 09:56:58.024806 kernel: DMA32 empty
Feb 9 09:56:58.024813 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:56:58.024819 kernel: Movable zone start for each node
Feb 9 09:56:58.024824 kernel: Early memory node ranges
Feb 9 09:56:58.024830 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 09:56:58.024836 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 09:56:58.024841 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 09:56:58.024847 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 09:56:58.024852 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 09:56:58.024858 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 09:56:58.024864 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 09:56:58.024869 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 09:56:58.024875 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 09:56:58.024882 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 09:56:58.024890 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 09:56:58.024896 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:56:58.024902 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 09:56:58.024908 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:56:58.024915 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 09:56:58.024921 kernel: psci: SMC Calling Convention v1.4
Feb 9 09:56:58.024927 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 09:56:58.024933 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 09:56:58.024939 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:56:58.024945 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:56:58.024951 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:56:58.024957 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:56:58.024963 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:56:58.024969 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 09:56:58.024975 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:56:58.024981 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:56:58.024988 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:56:58.024994 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 09:56:58.025000 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 09:56:58.025006 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 09:56:58.025012 kernel: Policy zone: Normal
Feb 9 09:56:58.025020 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:56:58.025026 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:56:58.025032 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:56:58.025038 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:56:58.025044 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:56:58.025052 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 09:56:58.025058 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved)
Feb 9 09:56:58.025064 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:56:58.025070 kernel: trace event string verifier disabled
Feb 9 09:56:58.025076 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:56:58.025093 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:56:58.025100 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:56:58.025106 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:56:58.025113 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:56:58.025119 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:56:58.025125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:56:58.025132 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:56:58.025138 kernel: GICv3: 960 SPIs implemented
Feb 9 09:56:58.025144 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:56:58.025150 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:56:58.025156 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:56:58.025162 kernel: GICv3: 16 PPIs implemented
Feb 9 09:56:58.025168 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 09:56:58.025174 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 09:56:58.025180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:58.025186 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 09:56:58.025192 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 09:56:58.025199 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 09:56:58.025207 kernel: Console: colour dummy device 80x25
Feb 9 09:56:58.025213 kernel: printk: console [tty1] enabled
Feb 9 09:56:58.025219 kernel: ACPI: Core revision 20210730
Feb 9 09:56:58.025226 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 09:56:58.025232 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:56:58.025238 kernel: LSM: Security Framework initializing
Feb 9 09:56:58.025244 kernel: SELinux: Initializing.
Feb 9 09:56:58.025250 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:56:58.025257 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:56:58.025264 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 09:56:58.025270 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 09:56:58.025276 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:56:58.025282 kernel: Remapping and enabling EFI services.
Feb 9 09:56:58.025289 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:56:58.025295 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:56:58.025301 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 09:56:58.025307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 09:56:58.025313 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 09:56:58.025320 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:56:58.025327 kernel: SMP: Total of 2 processors activated.
Feb 9 09:56:58.025333 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:56:58.025339 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 09:56:58.025346 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 09:56:58.025352 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:56:58.025358 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 09:56:58.025364 kernel: CPU features: detected: LSE atomic instructions
Feb 9 09:56:58.025370 kernel: CPU features: detected: Privileged Access Never
Feb 9 09:56:58.025378 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:56:58.025384 kernel: alternatives: patching kernel code
Feb 9 09:56:58.025395 kernel: devtmpfs: initialized
Feb 9 09:56:58.025402 kernel: KASLR enabled
Feb 9 09:56:58.025409 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:56:58.025416 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:56:58.025422 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:56:58.025428 kernel: SMBIOS 3.1.0 present.
Feb 9 09:56:58.025435 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 09:56:58.025442 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:56:58.025450 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:56:58.025456 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:56:58.025463 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:56:58.025469 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:56:58.025476 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 9 09:56:58.025482 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:56:58.025489 kernel: cpuidle: using governor menu
Feb 9 09:56:58.025497 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:56:58.025503 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:56:58.025510 kernel: ACPI: bus type PCI registered
Feb 9 09:56:58.025516 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:56:58.025523 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:56:58.025529 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:56:58.025536 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:56:58.025542 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:56:58.025549 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:56:58.025556 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:56:58.025563 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:56:58.025570 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:56:58.025576 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:56:58.025583 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:56:58.025589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:56:58.025596 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:56:58.025602 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:56:58.025608 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:56:58.025616 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:56:58.025622 kernel: ACPI: Interpreter enabled
Feb 9 09:56:58.025629 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:56:58.025635 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 09:56:58.025642 kernel: printk: console [ttyAMA0] enabled
Feb 9 09:56:58.025648 kernel: printk: bootconsole [pl11] disabled
Feb 9 09:56:58.025655 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 09:56:58.025661 kernel: iommu: Default domain type: Translated
Feb 9 09:56:58.025668 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:56:58.025676 kernel: vgaarb: loaded
Feb 9 09:56:58.025682 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:56:58.025689 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:56:58.025695 kernel: PTP clock support registered
Feb 9 09:56:58.025702 kernel: Registered efivars operations
Feb 9 09:56:58.025708 kernel: No ACPI PMU IRQ for CPU0
Feb 9 09:56:58.025715 kernel: No ACPI PMU IRQ for CPU1
Feb 9 09:56:58.025721 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:56:58.025727 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:56:58.025735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:56:58.025741 kernel: pnp: PnP ACPI init
Feb 9 09:56:58.025748 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 09:56:58.025754 kernel: NET: Registered PF_INET protocol family
Feb 9 09:56:58.025761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:56:58.025768 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:56:58.025774 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:56:58.025781 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:56:58.025787 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:56:58.025795 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:56:58.025802 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:56:58.025809 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:56:58.025815 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:56:58.025822 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:56:58.025828 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 09:56:58.025835 kernel: kvm [1]: HYP mode not available
Feb 9 09:56:58.025841 kernel: Initialise system trusted keyrings
Feb 9 09:56:58.025848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:56:58.025856 kernel: Key type asymmetric registered
Feb 9 09:56:58.025862 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:56:58.025868 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:56:58.025875 kernel: io scheduler mq-deadline registered
Feb 9 09:56:58.025882 kernel: io scheduler kyber registered
Feb 9 09:56:58.025888 kernel: io scheduler bfq registered
Feb 9 09:56:58.025894 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:56:58.025901 kernel: thunder_xcv, ver 1.0
Feb 9 09:56:58.025907 kernel: thunder_bgx, ver 1.0
Feb 9 09:56:58.025915 kernel: nicpf, ver 1.0
Feb 9 09:56:58.025921 kernel: nicvf, ver 1.0
Feb 9 09:56:58.026029 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:56:58.029130 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:56:57 UTC (1707472617)
Feb 9 09:56:58.029150 kernel: efifb: probing for efifb
Feb 9 09:56:58.029157 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 09:56:58.029164 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 09:56:58.029171 kernel: efifb: scrolling: redraw
Feb 9 09:56:58.029182 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 09:56:58.029188 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 09:56:58.029195 kernel: fb0: EFI VGA frame buffer device
Feb 9 09:56:58.029202 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 09:56:58.029209 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:56:58.029215 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:56:58.029222 kernel: Segment Routing with IPv6
Feb 9 09:56:58.029229 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:56:58.029236 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:56:58.029244 kernel: Key type dns_resolver registered
Feb 9 09:56:58.029251 kernel: registered taskstats version 1
Feb 9 09:56:58.029257 kernel: Loading compiled-in X.509 certificates
Feb 9 09:56:58.029264 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:56:58.029271 kernel: Key type .fscrypt registered
Feb 9 09:56:58.029277 kernel: Key type fscrypt-provisioning registered
Feb 9 09:56:58.029284 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:56:58.029291 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:56:58.029297 kernel: ima: No architecture policies found
Feb 9 09:56:58.029305 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:56:58.029312 kernel: Run /init as init process
Feb 9 09:56:58.029319 kernel: with arguments:
Feb 9 09:56:58.029325 kernel: /init
Feb 9 09:56:58.029332 kernel: with environment:
Feb 9 09:56:58.029338 kernel: HOME=/
Feb 9 09:56:58.029345 kernel: TERM=linux
Feb 9 09:56:58.029351 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:56:58.029360 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:56:58.029371 systemd[1]: Detected virtualization microsoft.
Feb 9 09:56:58.029378 systemd[1]: Detected architecture arm64.
Feb 9 09:56:58.029385 systemd[1]: Running in initrd.
Feb 9 09:56:58.029392 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:56:58.029399 systemd[1]: Hostname set to .
Feb 9 09:56:58.029406 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:56:58.029413 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:56:58.029421 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:56:58.029428 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:56:58.029435 systemd[1]: Reached target paths.target.
Feb 9 09:56:58.029442 systemd[1]: Reached target slices.target.
Feb 9 09:56:58.029450 systemd[1]: Reached target swap.target.
Feb 9 09:56:58.029457 systemd[1]: Reached target timers.target.
Feb 9 09:56:58.029464 systemd[1]: Listening on iscsid.socket.
Feb 9 09:56:58.029471 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:56:58.029479 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:56:58.029486 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:56:58.029494 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:56:58.029501 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:56:58.029508 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:56:58.029515 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:56:58.029522 systemd[1]: Reached target sockets.target.
Feb 9 09:56:58.029529 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:56:58.029536 systemd[1]: Finished network-cleanup.service.
Feb 9 09:56:58.029545 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:56:58.029552 systemd[1]: Starting systemd-journald.service...
Feb 9 09:56:58.029559 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:56:58.029566 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:56:58.029576 systemd-journald[275]: Journal started
Feb 9 09:56:58.029617 systemd-journald[275]: Runtime Journal (/run/log/journal/3a226d43a5ca4a138c66177102a9fcaa) is 8.0M, max 78.6M, 70.6M free.
Feb 9 09:56:58.021419 systemd-modules-load[276]: Inserted module 'overlay'
Feb 9 09:56:58.061094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:56:58.061118 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:56:58.066612 systemd-modules-load[276]: Inserted module 'br_netfilter'
Feb 9 09:56:58.075207 kernel: Bridge firewalling registered
Feb 9 09:56:58.074068 systemd-resolved[277]: Positive Trust Anchors:
Feb 9 09:56:58.120501 systemd[1]: Started systemd-journald.service.
Feb 9 09:56:58.120531 kernel: SCSI subsystem initialized
Feb 9 09:56:58.120542 kernel: audit: type=1130 audit(1707472618.096:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.074076 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:56:58.156402 kernel: audit: type=1130 audit(1707472618.126:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.074133 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:56:58.081259 systemd-resolved[277]: Defaulting to hostname 'linux'.
Feb 9 09:56:58.236290 kernel: audit: type=1130 audit(1707472618.199:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.236311 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:56:58.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.096330 systemd[1]: Started systemd-resolved.service.
Feb 9 09:56:58.269492 kernel: audit: type=1130 audit(1707472618.241:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.269520 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:56:58.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.126668 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:56:58.306966 kernel: audit: type=1130 audit(1707472618.275:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.306998 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:56:58.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.199577 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:56:58.242130 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:56:58.275915 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:56:58.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.321982 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:56:58.397529 kernel: audit: type=1130 audit(1707472618.349:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.397554 kernel: audit: type=1130 audit(1707472618.374:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.331499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:56:58.336935 systemd-modules-load[276]: Inserted module 'dm_multipath'
Feb 9 09:56:58.434455 kernel: audit: type=1130 audit(1707472618.410:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.338367 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:56:58.349484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:56:58.398401 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:56:58.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.495928 dracut-cmdline[296]: dracut-dracut-053
Feb 9 09:56:58.495928 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Feb 9 09:56:58.495928 dracut-cmdline[296]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:56:58.543190 kernel: audit: type=1130 audit(1707472618.470:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.434698 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:56:58.442475 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:56:58.464473 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:56:58.600119 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:56:58.613116 kernel: iscsi: registered transport (tcp)
Feb 9 09:56:58.634798 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:56:58.634846 kernel: QLogic iSCSI HBA Driver
Feb 9 09:56:58.663796 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:56:58.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:58.669912 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:56:58.725103 kernel: raid6: neonx8 gen() 13828 MB/s
Feb 9 09:56:58.746095 kernel: raid6: neonx8 xor() 10846 MB/s
Feb 9 09:56:58.767099 kernel: raid6: neonx4 gen() 13548 MB/s
Feb 9 09:56:58.789093 kernel: raid6: neonx4 xor() 11315 MB/s
Feb 9 09:56:58.810112 kernel: raid6: neonx2 gen() 12936 MB/s
Feb 9 09:56:58.831094 kernel: raid6: neonx2 xor() 10249 MB/s
Feb 9 09:56:58.853092 kernel: raid6: neonx1 gen() 10494 MB/s
Feb 9 09:56:58.874093 kernel: raid6: neonx1 xor() 8813 MB/s
Feb 9 09:56:58.896096 kernel: raid6: int64x8 gen() 6298 MB/s
Feb 9 09:56:58.917094 kernel: raid6: int64x8 xor() 3547 MB/s
Feb 9 09:56:58.938095 kernel: raid6: int64x4 gen() 7256 MB/s
Feb 9 09:56:58.960095 kernel: raid6: int64x4 xor() 3853 MB/s
Feb 9 09:56:58.981092 kernel: raid6: int64x2 gen() 6156 MB/s
Feb 9 09:56:59.002092 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 9 09:56:59.024093 kernel: raid6: int64x1 gen() 5041 MB/s
Feb 9 09:56:59.048869 kernel: raid6: int64x1 xor() 2647 MB/s
Feb 9 09:56:59.048881 kernel: raid6: using algorithm neonx8 gen() 13828 MB/s
Feb 9 09:56:59.048889 kernel: raid6: .... xor() 10846 MB/s, rmw enabled
Feb 9 09:56:59.053467 kernel: raid6: using neon recovery algorithm
Feb 9 09:56:59.073095 kernel: xor: measuring software checksum speed
Feb 9 09:56:59.082427 kernel: 8regs : 17289 MB/sec
Feb 9 09:56:59.082437 kernel: 32regs : 20723 MB/sec
Feb 9 09:56:59.087115 kernel: arm64_neon : 27987 MB/sec
Feb 9 09:56:59.087125 kernel: xor: using function: arm64_neon (27987 MB/sec)
Feb 9 09:56:59.148100 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:56:59.157281 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:56:59.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.166000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:56:59.167000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:56:59.167602 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:56:59.184170 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 09:56:59.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.190261 systemd[1]: Started systemd-udevd.service.
Feb 9 09:56:59.202484 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:56:59.216622 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 09:56:59.248683 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:56:59.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:56:59.254619 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:56:59.289186 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:56:59.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:56:59.349114 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 09:56:59.358330 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 09:56:59.359124 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 09:56:59.361108 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 09:56:59.362114 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 09:56:59.362140 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 09:56:59.365111 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 09:56:59.384613 kernel: scsi host0: storvsc_host_t Feb 9 09:56:59.433036 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 09:56:59.433076 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 09:56:59.433127 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 09:56:59.439104 kernel: scsi host1: storvsc_host_t Feb 9 09:56:59.460236 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 09:56:59.460473 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 09:56:59.462102 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 09:56:59.480523 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 09:56:59.480720 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 09:56:59.484915 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 09:56:59.494201 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 09:56:59.494381 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 09:56:59.505182 kernel: hv_netvsc 
000d3ac2-a296-000d-3ac2-a296000d3ac2 eth0: VF slot 1 added Feb 9 09:56:59.505385 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:59.520116 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 09:56:59.520287 kernel: hv_vmbus: registering driver hv_pci Feb 9 09:56:59.535109 kernel: hv_pci 35ba3d04-1124-4a7c-be13-ad005141b6aa: PCI VMBus probing: Using version 0x10004 Feb 9 09:56:59.552065 kernel: hv_pci 35ba3d04-1124-4a7c-be13-ad005141b6aa: PCI host bridge to bus 1124:00 Feb 9 09:56:59.552232 kernel: pci_bus 1124:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 09:56:59.552327 kernel: pci_bus 1124:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 09:56:59.566451 kernel: pci 1124:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 09:56:59.579272 kernel: pci 1124:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:56:59.601709 kernel: pci 1124:00:02.0: enabling Extended Tags Feb 9 09:56:59.633811 kernel: pci 1124:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1124:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 09:56:59.634012 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (540) Feb 9 09:56:59.634022 kernel: pci_bus 1124:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 09:56:59.646435 kernel: pci 1124:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 09:56:59.665010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:56:59.681698 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 09:56:59.704671 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 09:56:59.724774 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 09:56:59.732009 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 09:56:59.757240 systemd[1]: Starting disk-uuid.service... 
Feb 9 09:56:59.776009 kernel: mlx5_core 1124:00:02.0: firmware version: 16.30.1284 Feb 9 09:56:59.786513 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:56:59.798114 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:57:00.028382 kernel: mlx5_core 1124:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 09:57:00.082111 kernel: hv_netvsc 000d3ac2-a296-000d-3ac2-a296000d3ac2 eth0: VF registering: eth1 Feb 9 09:57:00.082284 kernel: mlx5_core 1124:00:02.0 eth1: joined to eth0 Feb 9 09:57:00.101119 kernel: mlx5_core 1124:00:02.0 enP4388s1: renamed from eth1 Feb 9 09:57:00.799103 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 09:57:00.799226 disk-uuid[596]: The operation has completed successfully. Feb 9 09:57:00.862599 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 09:57:00.864211 systemd[1]: Finished disk-uuid.service. Feb 9 09:57:00.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:00.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:00.877024 systemd[1]: Starting verity-setup.service... Feb 9 09:57:00.906101 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 09:57:00.984720 systemd[1]: Found device dev-mapper-usr.device. Feb 9 09:57:00.990418 systemd[1]: Finished verity-setup.service. Feb 9 09:57:00.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.000936 systemd[1]: Mounting sysusr-usr.mount... Feb 9 09:57:01.064105 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Feb 9 09:57:01.064336 systemd[1]: Mounted sysusr-usr.mount. Feb 9 09:57:01.068781 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 09:57:01.069536 systemd[1]: Starting ignition-setup.service... Feb 9 09:57:01.077548 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 09:57:01.119146 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:57:01.119203 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:57:01.119213 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:57:01.158442 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 09:57:01.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.168000 audit: BPF prog-id=9 op=LOAD Feb 9 09:57:01.168757 systemd[1]: Starting systemd-networkd.service... Feb 9 09:57:01.190315 systemd-networkd[844]: lo: Link UP Feb 9 09:57:01.190325 systemd-networkd[844]: lo: Gained carrier Feb 9 09:57:01.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.190997 systemd-networkd[844]: Enumeration completed Feb 9 09:57:01.193658 systemd[1]: Started systemd-networkd.service. Feb 9 09:57:01.194303 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:57:01.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.198371 systemd[1]: Reached target network.target. 
Feb 9 09:57:01.237493 iscsid[852]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:57:01.237493 iscsid[852]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 09:57:01.237493 iscsid[852]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 09:57:01.237493 iscsid[852]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 09:57:01.237493 iscsid[852]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 09:57:01.237493 iscsid[852]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 09:57:01.237493 iscsid[852]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 09:57:01.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.207394 systemd[1]: Starting iscsiuio.service... Feb 9 09:57:01.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.219499 systemd[1]: Started iscsiuio.service. Feb 9 09:57:01.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.224597 systemd[1]: Starting iscsid.service... Feb 9 09:57:01.241534 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 09:57:01.242019 systemd[1]: Started iscsid.service. 
Feb 9 09:57:01.269657 systemd[1]: Starting dracut-initqueue.service... Feb 9 09:57:01.313448 systemd[1]: Finished dracut-initqueue.service. Feb 9 09:57:01.325512 systemd[1]: Finished ignition-setup.service. Feb 9 09:57:01.333280 systemd[1]: Reached target remote-fs-pre.target. Feb 9 09:57:01.341795 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:57:01.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:01.350468 systemd[1]: Reached target remote-fs.target. Feb 9 09:57:01.362727 systemd[1]: Starting dracut-pre-mount.service... Feb 9 09:57:01.379111 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 09:57:01.389572 systemd[1]: Finished dracut-pre-mount.service. Feb 9 09:57:01.446112 kernel: mlx5_core 1124:00:02.0 enP4388s1: Link up Feb 9 09:57:01.488119 kernel: hv_netvsc 000d3ac2-a296-000d-3ac2-a296000d3ac2 eth0: Data path switched to VF: enP4388s1 Feb 9 09:57:01.495205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:57:01.495577 systemd-networkd[844]: enP4388s1: Link UP Feb 9 09:57:01.495665 systemd-networkd[844]: eth0: Link UP Feb 9 09:57:01.495782 systemd-networkd[844]: eth0: Gained carrier Feb 9 09:57:01.505526 systemd-networkd[844]: enP4388s1: Gained carrier Feb 9 09:57:01.522183 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:57:02.029772 ignition[868]: Ignition 2.14.0 Feb 9 09:57:02.033497 ignition[868]: Stage: fetch-offline Feb 9 09:57:02.033585 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:02.033610 ignition[868]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:02.065670 ignition[868]: no config dir at 
"/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:02.065860 ignition[868]: parsed url from cmdline: "" Feb 9 09:57:02.073429 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 09:57:02.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.065864 ignition[868]: no config URL provided Feb 9 09:57:02.112591 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 09:57:02.112615 kernel: audit: type=1130 audit(1707472622.078:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.088830 systemd[1]: Starting ignition-fetch.service... Feb 9 09:57:02.065870 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:57:02.065878 ignition[868]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:57:02.065887 ignition[868]: failed to fetch config: resource requires networking Feb 9 09:57:02.066318 ignition[868]: Ignition finished successfully Feb 9 09:57:02.095912 ignition[874]: Ignition 2.14.0 Feb 9 09:57:02.095919 ignition[874]: Stage: fetch Feb 9 09:57:02.096034 ignition[874]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:02.096060 ignition[874]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:02.099014 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:02.117524 ignition[874]: parsed url from cmdline: "" Feb 9 09:57:02.117533 ignition[874]: no config URL provided Feb 9 09:57:02.117541 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 09:57:02.117555 ignition[874]: no config at "/usr/lib/ignition/user.ign" Feb 9 09:57:02.117591 
ignition[874]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 09:57:02.222533 ignition[874]: GET result: OK Feb 9 09:57:02.222634 ignition[874]: config has been read from IMDS userdata Feb 9 09:57:02.222676 ignition[874]: parsing config with SHA512: baea04780c67e615a83a8bbb8f48e6dccefa660202090268d89415ba131061c225885419743c90e08f79112753017ca95ffc1552f05b3fac1e1a84843cdf0f80 Feb 9 09:57:02.238741 unknown[874]: fetched base config from "system" Feb 9 09:57:02.238759 unknown[874]: fetched base config from "system" Feb 9 09:57:02.239427 ignition[874]: fetch: fetch complete Feb 9 09:57:02.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.273108 kernel: audit: type=1130 audit(1707472622.252:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.238765 unknown[874]: fetched user config from "azure" Feb 9 09:57:02.239436 ignition[874]: fetch: fetch passed Feb 9 09:57:02.244212 systemd[1]: Finished ignition-fetch.service. Feb 9 09:57:02.239480 ignition[874]: Ignition finished successfully Feb 9 09:57:02.253958 systemd[1]: Starting ignition-kargs.service... Feb 9 09:57:02.286750 ignition[880]: Ignition 2.14.0 Feb 9 09:57:02.300359 systemd[1]: Finished ignition-kargs.service. Feb 9 09:57:02.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:02.286757 ignition[880]: Stage: kargs Feb 9 09:57:02.286864 ignition[880]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:02.346225 kernel: audit: type=1130 audit(1707472622.307:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.327516 systemd[1]: Starting ignition-disks.service... Feb 9 09:57:02.370781 kernel: audit: type=1130 audit(1707472622.346:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.286883 ignition[880]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:02.341864 systemd[1]: Finished ignition-disks.service. Feb 9 09:57:02.290623 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:02.346530 systemd[1]: Reached target initrd-root-device.target. Feb 9 09:57:02.292910 ignition[880]: kargs: kargs passed Feb 9 09:57:02.370985 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:57:02.292952 ignition[880]: Ignition finished successfully Feb 9 09:57:02.379696 systemd[1]: Reached target local-fs.target. Feb 9 09:57:02.333945 ignition[886]: Ignition 2.14.0 Feb 9 09:57:02.389714 systemd[1]: Reached target sysinit.target. Feb 9 09:57:02.333952 ignition[886]: Stage: disks Feb 9 09:57:02.398786 systemd[1]: Reached target basic.target. 
Feb 9 09:57:02.334051 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:02.410538 systemd[1]: Starting systemd-fsck-root.service... Feb 9 09:57:02.334069 ignition[886]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:02.337656 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:02.340580 ignition[886]: disks: disks passed Feb 9 09:57:02.340630 ignition[886]: Ignition finished successfully Feb 9 09:57:02.474758 systemd-fsck[894]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 09:57:02.484121 systemd[1]: Finished systemd-fsck-root.service. Feb 9 09:57:02.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.489667 systemd[1]: Mounting sysroot.mount... Feb 9 09:57:02.517942 kernel: audit: type=1130 audit(1707472622.488:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.531115 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 09:57:02.531917 systemd[1]: Mounted sysroot.mount. Feb 9 09:57:02.539392 systemd[1]: Reached target initrd-root-fs.target. Feb 9 09:57:02.551104 systemd[1]: Mounting sysroot-usr.mount... Feb 9 09:57:02.555771 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 09:57:02.563650 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 09:57:02.563681 systemd[1]: Reached target ignition-diskful.target. Feb 9 09:57:02.569817 systemd[1]: Mounted sysroot-usr.mount. 
Feb 9 09:57:02.588321 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:57:02.600111 systemd[1]: Starting initrd-setup-root.service... Feb 9 09:57:02.615720 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 09:57:02.628831 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (904) Feb 9 09:57:02.640459 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:57:02.640566 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:57:02.645212 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:57:02.647311 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Feb 9 09:57:02.658021 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 09:57:02.667916 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 09:57:02.676816 initrd-setup-root[951]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 09:57:02.695277 systemd-networkd[844]: eth0: Gained IPv6LL Feb 9 09:57:02.796638 systemd[1]: Finished initrd-setup-root.service. Feb 9 09:57:02.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.802621 systemd[1]: Starting ignition-mount.service... Feb 9 09:57:02.830107 kernel: audit: type=1130 audit(1707472622.801:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.837304 systemd[1]: Starting sysroot-boot.service... Feb 9 09:57:02.844365 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 09:57:02.844533 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 9 09:57:02.862722 ignition[972]: INFO : Ignition 2.14.0 Feb 9 09:57:02.862722 ignition[972]: INFO : Stage: mount Feb 9 09:57:02.874260 ignition[972]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:02.874260 ignition[972]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:02.874260 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:02.874260 ignition[972]: INFO : mount: mount passed Feb 9 09:57:02.874260 ignition[972]: INFO : Ignition finished successfully Feb 9 09:57:02.967534 kernel: audit: type=1130 audit(1707472622.905:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.967560 kernel: audit: type=1130 audit(1707472622.938:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:02.886709 systemd[1]: Finished ignition-mount.service. Feb 9 09:57:02.907476 systemd[1]: Finished sysroot-boot.service. 
Feb 9 09:57:03.048253 coreos-metadata[903]: Feb 09 09:57:03.048 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 09:57:03.059225 coreos-metadata[903]: Feb 09 09:57:03.059 INFO Fetch successful Feb 9 09:57:03.092011 coreos-metadata[903]: Feb 09 09:57:03.091 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 09:57:03.106815 coreos-metadata[903]: Feb 09 09:57:03.106 INFO Fetch successful Feb 9 09:57:03.112395 coreos-metadata[903]: Feb 09 09:57:03.112 INFO wrote hostname ci-3510.3.2-a-ac6bbec117 to /sysroot/etc/hostname Feb 9 09:57:03.122582 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 09:57:03.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:03.160286 kernel: audit: type=1130 audit(1707472623.128:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:03.154639 systemd[1]: Starting ignition-files.service... Feb 9 09:57:03.166536 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 09:57:03.198169 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984) Feb 9 09:57:03.198228 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 09:57:03.198238 kernel: BTRFS info (device sda6): using free space tree Feb 9 09:57:03.209761 kernel: BTRFS info (device sda6): has skinny extents Feb 9 09:57:03.214042 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 09:57:03.232273 ignition[1003]: INFO : Ignition 2.14.0 Feb 9 09:57:03.236630 ignition[1003]: INFO : Stage: files Feb 9 09:57:03.236630 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:03.236630 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:03.266513 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:03.266513 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Feb 9 09:57:03.266513 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 09:57:03.266513 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 09:57:03.298854 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 09:57:03.298854 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 09:57:03.298854 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 09:57:03.298854 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:57:03.298854 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 09:57:03.272179 unknown[1003]: wrote ssh authorized keys file for user: core Feb 9 09:57:03.733300 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 09:57:03.947324 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 
6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 09:57:03.966509 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 09:57:03.966509 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:57:03.966509 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 09:57:04.324451 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 09:57:04.456675 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 09:57:04.474661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 09:57:04.474661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:57:04.474661 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 09:57:04.667337 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 09:57:04.964686 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 09:57:04.981872 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 09:57:04.981872 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:57:04.981872 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 09:57:05.039372 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 09:57:05.706982 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 09:57:05.725480 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:57:05.725480 
ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:57:05.833909 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1005) Feb 9 09:57:05.781852 systemd[1]: mnt-oem2831841693.mount: Deactivated successfully. Feb 9 09:57:05.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2831841693" Feb 9 09:57:05.859205 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2831841693": device or resource busy Feb 9 09:57:05.859205 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2831841693", trying btrfs: device or resource busy Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2831841693" Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2831841693" Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2831841693" Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2831841693" Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 09:57:05.859205 ignition[1003]: INFO 
: files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2167775211" Feb 9 09:57:05.859205 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2167775211": device or resource busy Feb 9 09:57:05.859205 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2167775211", trying btrfs: device or resource busy Feb 9 09:57:05.859205 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2167775211" Feb 9 09:57:06.084309 kernel: audit: type=1130 audit(1707472625.839:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:06.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.824460 systemd[1]: Finished ignition-files.service. Feb 9 09:57:06.089445 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2167775211" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem2167775211" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem2167775211" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(12): [started] processing unit "waagent.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(12): [finished] processing unit "waagent.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(13): [started] processing unit "nvidia.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(13): [finished] processing unit "nvidia.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Feb 9 09:57:06.089445 ignition[1003]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Feb 9 09:57:06.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.862690 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 09:57:06.376429 ignition[1003]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 09:57:06.376429 ignition[1003]: INFO : files: files passed Feb 9 09:57:06.376429 ignition[1003]: INFO : Ignition finished successfully Feb 9 09:57:06.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:06.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.867814 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 09:57:06.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.531163 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 09:57:06.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.868612 systemd[1]: Starting ignition-quench.service... Feb 9 09:57:06.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.886836 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 09:57:06.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.918617 systemd[1]: ignition-quench.service: Deactivated successfully. 
Feb 9 09:57:06.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.918720 systemd[1]: Finished ignition-quench.service. Feb 9 09:57:06.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:05.936560 systemd[1]: Reached target ignition-complete.target. Feb 9 09:57:05.955053 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:57:05.994169 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 09:57:06.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.621822 ignition[1041]: INFO : Ignition 2.14.0 Feb 9 09:57:06.621822 ignition[1041]: INFO : Stage: umount Feb 9 09:57:06.621822 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:57:06.621822 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 09:57:06.621822 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 09:57:06.621822 ignition[1041]: INFO : umount: umount passed Feb 9 09:57:06.621822 ignition[1041]: INFO : Ignition finished successfully Feb 9 09:57:05.994276 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:57:06.007892 systemd[1]: Reached target initrd-fs.target. Feb 9 09:57:06.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:06.023318 systemd[1]: Reached target initrd.target. Feb 9 09:57:06.040001 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:57:06.040909 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:57:06.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.089654 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:57:06.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.105486 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:57:06.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.129982 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:57:06.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.779000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:57:06.143422 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:57:06.154685 systemd[1]: Stopped target timers.target. Feb 9 09:57:06.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:06.165268 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:57:06.165389 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:57:06.176202 systemd[1]: Stopped target initrd.target. Feb 9 09:57:06.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.188076 systemd[1]: Stopped target basic.target. Feb 9 09:57:06.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.200053 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:57:06.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.218454 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:57:06.237554 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:57:06.251212 systemd[1]: Stopped target remote-fs.target. Feb 9 09:57:06.263825 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:57:06.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.283494 systemd[1]: Stopped target sysinit.target. Feb 9 09:57:06.300572 systemd[1]: Stopped target local-fs.target. Feb 9 09:57:06.314041 systemd[1]: Stopped target local-fs-pre.target. 
Feb 9 09:57:06.937765 kernel: hv_netvsc 000d3ac2-a296-000d-3ac2-a296000d3ac2 eth0: Data path switched from VF: enP4388s1 Feb 9 09:57:06.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.328215 systemd[1]: Stopped target swap.target. Feb 9 09:57:06.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.342253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:57:06.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.342403 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:57:06.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.355956 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:57:06.370275 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:57:06.370420 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:57:06.381971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 9 09:57:06.382115 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:57:06.397856 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 09:57:06.397982 systemd[1]: Stopped ignition-files.service. Feb 9 09:57:06.412593 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:57:06.412723 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:57:06.429200 systemd[1]: Stopping ignition-mount.service... Feb 9 09:57:06.443939 systemd[1]: Stopping iscsiuio.service... Feb 9 09:57:06.450415 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:57:06.450645 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:57:06.486873 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:57:07.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:06.503495 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:57:06.503712 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:57:06.509580 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:57:06.509681 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:57:06.530178 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:57:06.530760 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:57:06.543615 systemd[1]: Stopped iscsiuio.service. Feb 9 09:57:06.548965 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:57:06.549059 systemd[1]: Stopped ignition-mount.service. Feb 9 09:57:06.559470 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:57:07.145852 systemd-journald[275]: Received SIGTERM from PID 1 (systemd). Feb 9 09:57:07.145889 iscsid[852]: iscsid shutting down. Feb 9 09:57:06.559568 systemd[1]: Stopped ignition-disks.service. 
Feb 9 09:57:06.570778 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:57:06.570928 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:57:06.577059 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 09:57:06.577210 systemd[1]: Stopped ignition-fetch.service. Feb 9 09:57:06.589205 systemd[1]: Stopped target network.target. Feb 9 09:57:06.604412 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:57:06.604558 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:57:06.615649 systemd[1]: Stopped target paths.target. Feb 9 09:57:06.626911 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:57:06.635913 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:57:06.649773 systemd[1]: Stopped target slices.target. Feb 9 09:57:06.670157 systemd[1]: Stopped target sockets.target. Feb 9 09:57:06.683501 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:57:06.683612 systemd[1]: Closed iscsid.socket. Feb 9 09:57:06.692628 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:57:06.692786 systemd[1]: Closed iscsiuio.socket. Feb 9 09:57:06.704835 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:57:06.704976 systemd[1]: Stopped ignition-setup.service. Feb 9 09:57:06.715316 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:57:06.724394 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:57:06.735530 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:57:06.735619 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:57:06.742221 systemd-networkd[844]: eth0: DHCPv6 lease lost Feb 9 09:57:07.146000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:57:06.749283 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:57:06.749376 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:57:06.760133 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 9 09:57:06.760234 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:57:06.770729 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:57:06.770813 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:57:06.780880 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:57:06.780919 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:57:06.790720 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:57:06.790768 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:57:06.805503 systemd[1]: Stopping network-cleanup.service... Feb 9 09:57:06.816893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:57:06.816965 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:57:06.827195 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:57:06.827250 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:57:06.841899 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:57:06.841945 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:57:06.852559 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:57:06.861582 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:57:06.871477 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 09:57:06.871636 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:57:06.882410 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:57:06.882464 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:57:06.893009 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:57:06.893051 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:57:06.902832 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:57:06.902879 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:57:06.921557 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Feb 9 09:57:06.921616 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:57:06.931966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:57:06.932016 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:57:06.942316 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:57:06.947747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:57:06.947817 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:57:06.960114 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:57:06.960215 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:57:07.056902 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:57:07.057014 systemd[1]: Stopped network-cleanup.service. Feb 9 09:57:07.066678 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:57:07.077270 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:57:07.095020 systemd[1]: Switching root. Feb 9 09:57:07.147767 systemd-journald[275]: Journal stopped Feb 9 09:57:11.092933 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 9 09:57:11.092953 kernel: audit: type=1334 audit(1707472627.146:79): prog-id=9 op=UNLOAD Feb 9 09:57:11.092964 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:57:11.092974 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 09:57:11.092983 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:57:11.092993 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:57:11.093002 kernel: SELinux: policy capability open_perms=1 Feb 9 09:57:11.093010 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:57:11.093019 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:57:11.093027 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:57:11.093037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:57:11.093045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:57:11.093053 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:57:11.093062 kernel: audit: type=1403 audit(1707472627.691:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:57:11.093073 systemd[1]: Successfully loaded SELinux policy in 138.833ms. Feb 9 09:57:11.093096 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.832ms. Feb 9 09:57:11.093108 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:57:11.093117 systemd[1]: Detected virtualization microsoft. Feb 9 09:57:11.093127 systemd[1]: Detected architecture arm64. Feb 9 09:57:11.093136 systemd[1]: Detected first boot. Feb 9 09:57:11.093145 systemd[1]: Hostname set to . Feb 9 09:57:11.093156 systemd[1]: Initializing machine ID from random generator. 
Feb 9 09:57:11.093166 kernel: audit: type=1400 audit(1707472627.979:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:57:11.093175 kernel: audit: type=1400 audit(1707472627.979:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:57:11.093184 kernel: audit: type=1334 audit(1707472627.997:83): prog-id=10 op=LOAD Feb 9 09:57:11.093193 kernel: audit: type=1334 audit(1707472627.997:84): prog-id=10 op=UNLOAD Feb 9 09:57:11.093202 kernel: audit: type=1334 audit(1707472628.016:85): prog-id=11 op=LOAD Feb 9 09:57:11.093211 kernel: audit: type=1334 audit(1707472628.016:86): prog-id=11 op=UNLOAD Feb 9 09:57:11.093220 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:57:11.093231 kernel: audit: type=1400 audit(1707472628.320:87): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:57:11.093241 kernel: audit: type=1300 audit(1707472628.320:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:11.093251 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:57:11.093260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 09:57:11.093271 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:57:11.093281 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:57:11.093292 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:57:11.093302 systemd[1]: Stopped iscsid.service. Feb 9 09:57:11.093312 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:57:11.093321 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:57:11.093331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:57:11.093343 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:57:11.093352 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:57:11.093362 systemd[1]: Created slice system-getty.slice. Feb 9 09:57:11.093373 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:57:11.093383 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:57:11.093393 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:57:11.093402 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:57:11.093413 systemd[1]: Created slice user.slice. Feb 9 09:57:11.093423 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:57:11.093432 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:57:11.093442 systemd[1]: Set up automount boot.automount. Feb 9 09:57:11.093451 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:57:11.093463 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:57:11.093473 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:57:11.093483 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:57:11.093492 systemd[1]: Reached target integritysetup.target. 
Feb 9 09:57:11.093502 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:57:11.093512 systemd[1]: Reached target remote-fs.target. Feb 9 09:57:11.093521 systemd[1]: Reached target slices.target. Feb 9 09:57:11.093531 systemd[1]: Reached target swap.target. Feb 9 09:57:11.093542 systemd[1]: Reached target torcx.target. Feb 9 09:57:11.093551 systemd[1]: Reached target veritysetup.target. Feb 9 09:57:11.093561 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:57:11.093571 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:57:11.093582 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:57:11.093592 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:57:11.093601 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:57:11.093611 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:57:11.093622 systemd[1]: Mounting dev-hugepages.mount... Feb 9 09:57:11.093632 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:57:11.093642 systemd[1]: Mounting media.mount... Feb 9 09:57:11.093651 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:57:11.093661 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:57:11.093672 systemd[1]: Mounting tmp.mount... Feb 9 09:57:11.093682 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:57:11.093692 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:57:11.093702 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:57:11.093711 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:57:11.093721 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:57:11.093731 systemd[1]: Starting modprobe@drm.service... Feb 9 09:57:11.093740 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:57:11.093750 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:57:11.093761 systemd[1]: Starting modprobe@loop.service... 
Feb 9 09:57:11.093771 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:57:11.093781 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:57:11.093791 kernel: fuse: init (API version 7.34) Feb 9 09:57:11.093800 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:57:11.093809 kernel: loop: module loaded Feb 9 09:57:11.093819 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:57:11.093830 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:57:11.093839 systemd[1]: Stopped systemd-journald.service. Feb 9 09:57:11.093850 systemd[1]: systemd-journald.service: Consumed 3.104s CPU time. Feb 9 09:57:11.093860 systemd[1]: Starting systemd-journald.service... Feb 9 09:57:11.093870 systemd[1]: Starting systemd-modules-load.service... Feb 9 09:57:11.093879 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:57:11.093889 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:57:11.093899 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:57:11.093909 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:57:11.093919 systemd[1]: Stopped verity-setup.service. Feb 9 09:57:11.093931 systemd-journald[1182]: Journal started Feb 9 09:57:11.093969 systemd-journald[1182]: Runtime Journal (/run/log/journal/5e28bbdce4bc46c5a492042246e10700) is 8.0M, max 78.6M, 70.6M free. 
Feb 9 09:57:07.691000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:57:07.979000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:57:07.979000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:57:07.997000 audit: BPF prog-id=10 op=LOAD Feb 9 09:57:07.997000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:57:08.016000 audit: BPF prog-id=11 op=LOAD Feb 9 09:57:08.016000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:57:08.320000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:57:08.320000 audit[1075]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458a2 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:08.320000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:57:08.330000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:57:08.330000 audit[1075]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145979 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:08.330000 audit: CWD cwd="/" Feb 9 09:57:08.330000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:08.330000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:08.330000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:57:10.497000 audit: BPF prog-id=12 op=LOAD Feb 9 09:57:10.497000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:57:10.497000 audit: BPF prog-id=13 op=LOAD Feb 9 09:57:10.497000 audit: BPF prog-id=14 op=LOAD Feb 9 09:57:10.497000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:57:10.497000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:57:10.498000 audit: BPF prog-id=15 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:57:10.498000 audit: BPF prog-id=16 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=17 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:57:10.498000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:57:10.498000 audit: BPF prog-id=18 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=15 op=UNLOAD Feb 9 09:57:10.498000 audit: BPF prog-id=19 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=20 op=LOAD Feb 9 09:57:10.498000 audit: BPF prog-id=16 op=UNLOAD 
Feb 9 09:57:10.498000 audit: BPF prog-id=17 op=UNLOAD Feb 9 09:57:10.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:10.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:10.514000 audit: BPF prog-id=18 op=UNLOAD Feb 9 09:57:10.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:10.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:10.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:11.026000 audit: BPF prog-id=21 op=LOAD Feb 9 09:57:11.026000 audit: BPF prog-id=22 op=LOAD Feb 9 09:57:11.026000 audit: BPF prog-id=23 op=LOAD Feb 9 09:57:11.026000 audit: BPF prog-id=19 op=UNLOAD Feb 9 09:57:11.026000 audit: BPF prog-id=20 op=UNLOAD Feb 9 09:57:11.090000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:57:11.090000 audit[1182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe3e060f0 a2=4000 a3=1 items=0 ppid=1 pid=1182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:11.090000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:57:11.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:08.308118 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:57:10.495767 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:57:08.316182 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:57:10.499418 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 09:57:08.316202 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:57:10.499737 systemd[1]: systemd-journald.service: Consumed 3.104s CPU time. Feb 9 09:57:08.316238 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:57:08.316248 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:57:08.316279 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:57:08.316291 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:57:08.316495 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:57:08.316527 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:57:08.316538 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:57:08.316924 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:57:08.316956 /usr/lib/systemd/system-generators/torcx-generator[1075]: 
time="2024-02-09T09:57:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:57:08.316973 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:57:08.316987 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:57:08.317004 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:57:08.317017 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:57:10.025433 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:57:10.025701 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:57:10.025792 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network 
/lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:57:10.025942 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:57:10.025987 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:57:10.026042 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2024-02-09T09:57:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:57:11.105654 systemd[1]: Started systemd-journald.service. Feb 9 09:57:11.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.106514 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:57:11.110517 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:57:11.114802 systemd[1]: Mounted media.mount. Feb 9 09:57:11.118792 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:57:11.122992 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:57:11.127349 systemd[1]: Mounted tmp.mount. Feb 9 09:57:11.131013 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 09:57:11.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.136159 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:57:11.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.140929 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:57:11.141055 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:57:11.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.146073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:57:11.146203 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:57:11.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.151294 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:57:11.151411 systemd[1]: Finished modprobe@drm.service. 
Feb 9 09:57:11.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.156259 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:57:11.156374 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:57:11.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.161667 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:57:11.161785 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:57:11.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.166463 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:57:11.166576 systemd[1]: Finished modprobe@loop.service. 
Feb 9 09:57:11.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.171262 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:57:11.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.176464 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:57:11.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.182347 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:57:11.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.187454 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:57:11.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.192772 systemd[1]: Reached target network-pre.target. Feb 9 09:57:11.198750 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:57:11.204436 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 09:57:11.208759 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 09:57:11.213279 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:57:11.218546 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:57:11.222714 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:57:11.223680 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:57:11.228139 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:57:11.229126 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:57:11.234705 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:57:11.239747 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:57:11.242409 systemd-journald[1182]: Time spent on flushing to /var/log/journal/5e28bbdce4bc46c5a492042246e10700 is 18.844ms for 1093 entries. Feb 9 09:57:11.242409 systemd-journald[1182]: System Journal (/var/log/journal/5e28bbdce4bc46c5a492042246e10700) is 8.0M, max 2.6G, 2.6G free. Feb 9 09:57:11.319455 systemd-journald[1182]: Received client request to flush runtime journal. Feb 9 09:57:11.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.252063 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:57:11.257336 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:57:11.319891 udevadm[1196]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 09:57:11.266849 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:57:11.272592 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:57:11.286910 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:57:11.320414 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:57:11.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.408812 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:57:11.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.713969 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:57:11.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.719000 audit: BPF prog-id=24 op=LOAD Feb 9 09:57:11.719000 audit: BPF prog-id=25 op=LOAD Feb 9 09:57:11.719000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:57:11.719000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:57:11.720449 systemd[1]: Starting systemd-udevd.service... Feb 9 09:57:11.738382 systemd-udevd[1199]: Using default interface naming scheme 'v252'. Feb 9 09:57:11.807020 systemd[1]: Started systemd-udevd.service. Feb 9 09:57:11.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:57:11.817000 audit: BPF prog-id=26 op=LOAD Feb 9 09:57:11.818640 systemd[1]: Starting systemd-networkd.service... Feb 9 09:57:11.841821 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 09:57:11.845000 audit: BPF prog-id=27 op=LOAD Feb 9 09:57:11.845000 audit: BPF prog-id=28 op=LOAD Feb 9 09:57:11.845000 audit: BPF prog-id=29 op=LOAD Feb 9 09:57:11.846128 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:57:11.890556 systemd[1]: Started systemd-userdbd.service. Feb 9 09:57:11.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.921113 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:57:11.920000 audit[1203]: AVC avc: denied { confidentiality } for pid=1203 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:57:11.936117 kernel: hv_vmbus: registering driver hv_balloon Feb 9 09:57:11.947662 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 09:57:11.947749 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 09:57:11.920000 audit[1203]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf3296e50 a1=aa2c a2=ffffafee24b0 a3=aaaaf31f8010 items=12 ppid=1199 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:57:11.920000 audit: CWD cwd="/" Feb 9 09:57:11.920000 audit: PATH item=0 name=(null) inode=6316 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=1 name=(null) inode=9967 
dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=2 name=(null) inode=9967 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=3 name=(null) inode=9968 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=4 name=(null) inode=9967 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=5 name=(null) inode=9969 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=6 name=(null) inode=9967 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=7 name=(null) inode=9970 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=8 name=(null) inode=9967 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=9 name=(null) inode=9971 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=10 name=(null) inode=9967 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PATH item=11 name=(null) inode=9972 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:57:11.920000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:57:11.980159 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 09:57:11.980259 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 09:57:11.993470 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 09:57:11.993645 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 09:57:12.003757 kernel: Console: switching to colour dummy device 80x25 Feb 9 09:57:12.009108 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:57:12.009183 kernel: hv_vmbus: registering driver hv_utils Feb 9 09:57:12.018671 systemd-networkd[1220]: lo: Link UP Feb 9 09:57:12.028301 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 09:57:12.028335 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 09:57:12.028353 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 09:57:12.018683 systemd-networkd[1220]: lo: Gained carrier Feb 9 09:57:12.019052 systemd-networkd[1220]: Enumeration completed Feb 9 09:57:12.019164 systemd[1]: Started systemd-networkd.service. Feb 9 09:57:11.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.836864 systemd-journald[1182]: Time jumped backwards, rotating. 
Feb 9 09:57:11.836966 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1217) Feb 9 09:57:11.836986 kernel: mlx5_core 1124:00:02.0 enP4388s1: Link up Feb 9 09:57:11.771131 systemd-networkd[1220]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:57:11.772072 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:57:11.843823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:57:11.862017 systemd-networkd[1220]: enP4388s1: Link UP Feb 9 09:57:11.862103 systemd-networkd[1220]: eth0: Link UP Feb 9 09:57:11.862106 systemd-networkd[1220]: eth0: Gained carrier Feb 9 09:57:11.862317 kernel: hv_netvsc 000d3ac2-a296-000d-3ac2-a296000d3ac2 eth0: Data path switched to VF: enP4388s1 Feb 9 09:57:11.870529 systemd-networkd[1220]: enP4388s1: Gained carrier Feb 9 09:57:11.876399 systemd-networkd[1220]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:57:11.897779 systemd[1]: Finished systemd-udev-settle.service. Feb 9 09:57:11.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.904018 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:57:11.906560 kernel: kauditd_printk_skb: 94 callbacks suppressed Feb 9 09:57:11.906593 kernel: audit: type=1130 audit(1707472631.902:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:57:11.970069 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:57:11.999128 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 09:57:12.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.004711 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:57:12.022309 kernel: audit: type=1130 audit(1707472632.004:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.027952 systemd[1]: Starting lvm2-activation.service...
Feb 9 09:57:12.032261 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:57:12.063224 systemd[1]: Finished lvm2-activation.service.
Feb 9 09:57:12.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.073462 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:57:12.089225 kernel: audit: type=1130 audit(1707472632.068:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.089723 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 09:57:12.089840 systemd[1]: Reached target local-fs.target.
Feb 9 09:57:12.094144 systemd[1]: Reached target machines.target.
Feb 9 09:57:12.099630 systemd[1]: Starting ldconfig.service...
Feb 9 09:57:12.103451 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 09:57:12.103611 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:57:12.104847 systemd[1]: Starting systemd-boot-update.service...
Feb 9 09:57:12.111470 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 09:57:12.117922 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 09:57:12.122853 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:57:12.122906 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:57:12.123940 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 09:57:12.131432 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1281 (bootctl)
Feb 9 09:57:12.132434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 09:57:12.170427 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 09:57:12.409268 kernel: audit: type=1130 audit(1707472632.176:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.551629 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 09:57:12.920381 systemd-networkd[1220]: eth0: Gained IPv6LL
Feb 9 09:57:12.925171 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 09:57:12.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:12.950306 kernel: audit: type=1130 audit(1707472632.931:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.077313 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 09:57:13.078141 systemd-fsck[1289]: fsck.fat 4.2 (2021-01-31)
Feb 9 09:57:13.078141 systemd-fsck[1289]: /dev/sda1: 236 files, 113719/258078 clusters
Feb 9 09:57:13.081655 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 09:57:13.090903 systemd[1]: Mounting boot.mount...
Feb 9 09:57:13.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.120377 kernel: audit: type=1130 audit(1707472633.089:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.129157 systemd[1]: Mounted boot.mount.
Feb 9 09:57:13.139339 systemd[1]: Finished systemd-boot-update.service.
Feb 9 09:57:13.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.154810 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 09:57:13.165323 kernel: audit: type=1130 audit(1707472633.144:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.340881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 09:57:13.341467 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 09:57:13.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.368348 kernel: audit: type=1130 audit(1707472633.346:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.467607 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 09:57:13.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.492440 kernel: audit: type=1130 audit(1707472633.472:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.492840 systemd[1]: Starting audit-rules.service...
Feb 9 09:57:13.497289 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 09:57:13.503502 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 09:57:13.511000 audit: BPF prog-id=30 op=LOAD
Feb 9 09:57:13.512998 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:57:13.522391 kernel: audit: type=1334 audit(1707472633.511:168): prog-id=30 op=LOAD
Feb 9 09:57:13.523000 audit: BPF prog-id=31 op=LOAD
Feb 9 09:57:13.524770 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 09:57:13.530742 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 09:57:13.535387 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 09:57:13.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.540489 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 09:57:13.550000 audit[1308]: SYSTEM_BOOT pid=1308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.555608 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 09:57:13.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.587664 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 09:57:13.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.607648 systemd[1]: Started systemd-timesyncd.service.
Feb 9 09:57:13.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:57:13.612978 systemd[1]: Reached target time-set.target.
Feb 9 09:57:13.639794 systemd-resolved[1306]: Positive Trust Anchors:
Feb 9 09:57:13.639808 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:57:13.639837 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:57:13.647000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 09:57:13.647000 audit[1318]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffefa164e0 a2=420 a3=0 items=0 ppid=1297 pid=1318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:57:13.647000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 09:57:13.651235 augenrules[1318]: No rules
Feb 9 09:57:13.652186 systemd[1]: Finished audit-rules.service.
Feb 9 09:57:13.660518 systemd-resolved[1306]: Using system hostname 'ci-3510.3.2-a-ac6bbec117'.
Feb 9 09:57:13.662076 systemd[1]: Started systemd-resolved.service.
Feb 9 09:57:13.667164 systemd[1]: Reached target network.target.
Feb 9 09:57:13.671514 systemd[1]: Reached target network-online.target.
Feb 9 09:57:13.676261 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:57:13.881092 systemd-timesyncd[1307]: Contacted time server 108.61.56.35:123 (0.flatcar.pool.ntp.org).
Feb 9 09:57:13.881513 systemd-timesyncd[1307]: Initial clock synchronization to Fri 2024-02-09 09:57:13.886752 UTC.
Feb 9 09:57:14.993475 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 09:57:15.005457 systemd[1]: Finished ldconfig.service.
Feb 9 09:57:15.011541 systemd[1]: Starting systemd-update-done.service...
Feb 9 09:57:15.027654 systemd[1]: Finished systemd-update-done.service.
Feb 9 09:57:15.032494 systemd[1]: Reached target sysinit.target.
Feb 9 09:57:15.036909 systemd[1]: Started motdgen.path.
Feb 9 09:57:15.041018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 09:57:15.048848 systemd[1]: Started logrotate.timer.
Feb 9 09:57:15.052742 systemd[1]: Started mdadm.timer.
Feb 9 09:57:15.058097 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 09:57:15.062782 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 09:57:15.062815 systemd[1]: Reached target paths.target.
Feb 9 09:57:15.066822 systemd[1]: Reached target timers.target.
Feb 9 09:57:15.071668 systemd[1]: Listening on dbus.socket.
Feb 9 09:57:15.076879 systemd[1]: Starting docker.socket...
Feb 9 09:57:15.082934 systemd[1]: Listening on sshd.socket.
Feb 9 09:57:15.087027 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:57:15.087486 systemd[1]: Listening on docker.socket.
Feb 9 09:57:15.091597 systemd[1]: Reached target sockets.target.
Feb 9 09:57:15.095860 systemd[1]: Reached target basic.target.
Feb 9 09:57:15.099969 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:57:15.099995 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 09:57:15.100999 systemd[1]: Starting containerd.service...
Feb 9 09:57:15.106530 systemd[1]: Starting dbus.service...
Feb 9 09:57:15.110570 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 09:57:15.115671 systemd[1]: Starting extend-filesystems.service...
Feb 9 09:57:15.120105 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 09:57:15.121067 systemd[1]: Starting motdgen.service...
Feb 9 09:57:15.125986 systemd[1]: Started nvidia.service.
Feb 9 09:57:15.131071 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 09:57:15.131576 jq[1328]: false
Feb 9 09:57:15.136738 systemd[1]: Starting prepare-critools.service...
Feb 9 09:57:15.141630 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 09:57:15.147161 systemd[1]: Starting sshd-keygen.service...
Feb 9 09:57:15.157499 systemd[1]: Starting systemd-logind.service...
Feb 9 09:57:15.161227 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:57:15.161285 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 09:57:15.161690 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 09:57:15.162506 systemd[1]: Starting update-engine.service...
Feb 9 09:57:15.164795 extend-filesystems[1329]: Found sda
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda1
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda2
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda3
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found usr
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda4
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda6
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda7
Feb 9 09:57:15.183686 extend-filesystems[1329]: Found sda9
Feb 9 09:57:15.183686 extend-filesystems[1329]: Checking size of /dev/sda9
Feb 9 09:57:15.169432 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 09:57:15.236107 dbus-daemon[1327]: [system] SELinux support is enabled
Feb 9 09:57:15.355572 extend-filesystems[1329]: Old size kept for /dev/sda9
Feb 9 09:57:15.355572 extend-filesystems[1329]: Found sr0
Feb 9 09:57:15.176241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 09:57:15.309469 dbus-daemon[1327]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 09:57:15.177430 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 09:57:15.403280 jq[1349]: true
Feb 9 09:57:15.181776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 09:57:15.181933 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 09:57:15.403672 tar[1353]: ./
Feb 9 09:57:15.403672 tar[1353]: ./macvlan
Feb 9 09:57:15.403672 tar[1353]: ./static
Feb 9 09:57:15.194720 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 09:57:15.405180 tar[1354]: crictl
Feb 9 09:57:15.194871 systemd[1]: Finished motdgen.service.
Feb 9 09:57:15.405501 jq[1359]: true
Feb 9 09:57:15.224950 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 09:57:15.405678 env[1360]: time="2024-02-09T09:57:15.350879304Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 09:57:15.225117 systemd[1]: Finished extend-filesystems.service.
Feb 9 09:57:15.250606 systemd[1]: Started dbus.service.
Feb 9 09:57:15.409073 bash[1386]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 09:57:15.264110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 09:57:15.264135 systemd[1]: Reached target system-config.target.
Feb 9 09:57:15.287701 systemd-logind[1344]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 09:57:15.292270 systemd-logind[1344]: New seat seat0.
Feb 9 09:57:15.297112 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 09:57:15.297133 systemd[1]: Reached target user-config.target.
Feb 9 09:57:15.308324 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 09:57:15.309636 systemd[1]: Started systemd-logind.service.
Feb 9 09:57:15.354229 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 09:57:15.432882 update_engine[1346]: I0209 09:57:15.428664 1346 main.cc:92] Flatcar Update Engine starting
Feb 9 09:57:15.446880 tar[1353]: ./vlan
Feb 9 09:57:15.448970 systemd[1]: Started update-engine.service.
Feb 9 09:57:15.456275 update_engine[1346]: I0209 09:57:15.456226 1346 update_check_scheduler.cc:74] Next update check in 7m25s
Feb 9 09:57:15.459777 systemd[1]: Started locksmithd.service.
Feb 9 09:57:15.475965 env[1360]: time="2024-02-09T09:57:15.475926192Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 09:57:15.476179 env[1360]: time="2024-02-09T09:57:15.476160751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478566 env[1360]: time="2024-02-09T09:57:15.478525190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478566 env[1360]: time="2024-02-09T09:57:15.478563283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478815 env[1360]: time="2024-02-09T09:57:15.478783837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478815 env[1360]: time="2024-02-09T09:57:15.478811166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478879 env[1360]: time="2024-02-09T09:57:15.478827052Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 09:57:15.478879 env[1360]: time="2024-02-09T09:57:15.478837575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.478935 env[1360]: time="2024-02-09T09:57:15.478913521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.479151 env[1360]: time="2024-02-09T09:57:15.479125753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 09:57:15.479276 env[1360]: time="2024-02-09T09:57:15.479251035Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 09:57:15.479276 env[1360]: time="2024-02-09T09:57:15.479271842Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 09:57:15.479379 env[1360]: time="2024-02-09T09:57:15.479354430Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 09:57:15.479379 env[1360]: time="2024-02-09T09:57:15.479376598Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 09:57:15.498787 env[1360]: time="2024-02-09T09:57:15.498725775Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 09:57:15.498898 env[1360]: time="2024-02-09T09:57:15.498796439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 09:57:15.498898 env[1360]: time="2024-02-09T09:57:15.498813604Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 09:57:15.498975 env[1360]: time="2024-02-09T09:57:15.498915399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.498975 env[1360]: time="2024-02-09T09:57:15.498943929Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.498975 env[1360]: time="2024-02-09T09:57:15.498960734Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499034 env[1360]: time="2024-02-09T09:57:15.498977220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499402 env[1360]: time="2024-02-09T09:57:15.499378155Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499433 env[1360]: time="2024-02-09T09:57:15.499407245Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499433 env[1360]: time="2024-02-09T09:57:15.499422970Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499481 env[1360]: time="2024-02-09T09:57:15.499436015Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.499481 env[1360]: time="2024-02-09T09:57:15.499459343Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 09:57:15.499648 env[1360]: time="2024-02-09T09:57:15.499624799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 09:57:15.499748 env[1360]: time="2024-02-09T09:57:15.499728153Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 09:57:15.500021 env[1360]: time="2024-02-09T09:57:15.499991963Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 09:57:15.500057 env[1360]: time="2024-02-09T09:57:15.500031696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500057 env[1360]: time="2024-02-09T09:57:15.500045701Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 09:57:15.500163 env[1360]: time="2024-02-09T09:57:15.500104521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500163 env[1360]: time="2024-02-09T09:57:15.500122927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500163 env[1360]: time="2024-02-09T09:57:15.500135811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500163 env[1360]: time="2024-02-09T09:57:15.500147535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500255 env[1360]: time="2024-02-09T09:57:15.500169303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500255 env[1360]: time="2024-02-09T09:57:15.500183067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500255 env[1360]: time="2024-02-09T09:57:15.500194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500255 env[1360]: time="2024-02-09T09:57:15.500204995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500255 env[1360]: time="2024-02-09T09:57:15.500219199Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 09:57:15.500413 env[1360]: time="2024-02-09T09:57:15.500389537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500445 env[1360]: time="2024-02-09T09:57:15.500415186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500445 env[1360]: time="2024-02-09T09:57:15.500428230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500445 env[1360]: time="2024-02-09T09:57:15.500439514Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 09:57:15.500507 env[1360]: time="2024-02-09T09:57:15.500462802Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 09:57:15.500507 env[1360]: time="2024-02-09T09:57:15.500475046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 09:57:15.500507 env[1360]: time="2024-02-09T09:57:15.500493412Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 09:57:15.500568 env[1360]: time="2024-02-09T09:57:15.500542909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 09:57:15.500819 env[1360]: time="2024-02-09T09:57:15.500751659Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 09:57:15.505970 env[1360]: time="2024-02-09T09:57:15.500819962Z" level=info msg="Connect containerd service"
Feb 9 09:57:15.505970 env[1360]: time="2024-02-09T09:57:15.500863177Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 09:57:15.505970 env[1360]: time="2024-02-09T09:57:15.504612764Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:57:15.505970 env[1360]: time="2024-02-09T09:57:15.504885856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 09:57:15.505970 env[1360]: time="2024-02-09T09:57:15.504943035Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 09:57:15.505069 systemd[1]: Started containerd.service.
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509431272Z" level=info msg="containerd successfully booted in 0.172768s"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509493013Z" level=info msg="Start subscribing containerd event"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509537588Z" level=info msg="Start recovering state"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509597408Z" level=info msg="Start event monitor"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509635741Z" level=info msg="Start snapshots syncer"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509646064Z" level=info msg="Start cni network conf syncer for default"
Feb 9 09:57:15.511310 env[1360]: time="2024-02-09T09:57:15.509654067Z" level=info msg="Start streaming server"
Feb 9 09:57:15.521463 tar[1353]: ./portmap
Feb 9 09:57:15.553993 tar[1353]: ./host-local
Feb 9 09:57:15.579909 tar[1353]: ./vrf
Feb 9 09:57:15.608774 tar[1353]: ./bridge
Feb 9 09:57:15.641741 tar[1353]: ./tuning
Feb 9 09:57:15.668972 tar[1353]: ./firewall
Feb 9 09:57:15.703098 tar[1353]: ./host-device
Feb 9 09:57:15.733560 tar[1353]: ./sbr
Feb 9 09:57:15.761025 tar[1353]: ./loopback
Feb 9 09:57:15.788191 tar[1353]: ./dhcp
Feb 9 09:57:15.863742 tar[1353]: ./ptp
Feb 9 09:57:15.897477 tar[1353]: ./ipvlan
Feb 9 09:57:15.929134 tar[1353]: ./bandwidth
Feb 9 09:57:16.012219 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 09:57:16.026510 systemd[1]: Finished prepare-critools.service.
Feb 9 09:57:16.617628 locksmithd[1415]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 09:57:17.309469 sshd_keygen[1352]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 09:57:17.325662 systemd[1]: Finished sshd-keygen.service.
Feb 9 09:57:17.331257 systemd[1]: Starting issuegen.service...
Feb 9 09:57:17.335890 systemd[1]: Started waagent.service.
Feb 9 09:57:17.340231 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 09:57:17.340395 systemd[1]: Finished issuegen.service. Feb 9 09:57:17.345420 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:57:17.359951 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:57:17.366329 systemd[1]: Started getty@tty1.service. Feb 9 09:57:17.374544 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 09:57:17.379436 systemd[1]: Reached target getty.target. Feb 9 09:57:17.383628 systemd[1]: Reached target multi-user.target. Feb 9 09:57:17.389371 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:57:17.401151 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:57:17.401321 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:57:17.406450 systemd[1]: Startup finished in 735ms (kernel) + 9.887s (initrd) + 10.149s (userspace) = 20.773s. Feb 9 09:57:18.215922 login[1442]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:57:18.219080 login[1441]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:57:18.240432 systemd[1]: Created slice user-500.slice. Feb 9 09:57:18.241500 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:57:18.244677 systemd-logind[1344]: New session 1 of user core. Feb 9 09:57:18.467720 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:57:18.469229 systemd[1]: Starting user@500.service... Feb 9 09:57:18.559665 (systemd)[1445]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:57:18.684065 systemd[1445]: Queued start job for default target default.target. Feb 9 09:57:18.684586 systemd[1445]: Reached target paths.target. Feb 9 09:57:18.684605 systemd[1445]: Reached target sockets.target. Feb 9 09:57:18.684616 systemd[1445]: Reached target timers.target. Feb 9 09:57:18.684626 systemd[1445]: Reached target basic.target. Feb 9 09:57:18.684721 systemd[1]: Started user@500.service. 
Feb 9 09:57:18.685366 systemd[1445]: Reached target default.target. Feb 9 09:57:18.685408 systemd[1445]: Startup finished in 119ms. Feb 9 09:57:18.685594 systemd[1]: Started session-1.scope. Feb 9 09:57:19.217648 login[1442]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:57:19.221580 systemd-logind[1344]: New session 2 of user core. Feb 9 09:57:19.221985 systemd[1]: Started session-2.scope. Feb 9 09:57:26.155982 waagent[1439]: 2024-02-09T09:57:26.155880Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 09:57:26.171740 waagent[1439]: 2024-02-09T09:57:26.171650Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 09:57:26.176449 waagent[1439]: 2024-02-09T09:57:26.176385Z INFO Daemon Daemon Python: 3.9.16 Feb 9 09:57:26.180878 waagent[1439]: 2024-02-09T09:57:26.180800Z INFO Daemon Daemon Run daemon Feb 9 09:57:26.185312 waagent[1439]: 2024-02-09T09:57:26.185246Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 09:57:26.202279 waagent[1439]: 2024-02-09T09:57:26.202151Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 09:57:26.218216 waagent[1439]: 2024-02-09T09:57:26.218086Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:57:26.228412 waagent[1439]: 2024-02-09T09:57:26.228323Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:57:26.233568 waagent[1439]: 2024-02-09T09:57:26.233488Z INFO Daemon Daemon Using waagent for provisioning Feb 9 09:57:26.239285 waagent[1439]: 2024-02-09T09:57:26.239217Z INFO Daemon Daemon Activate resource disk Feb 9 09:57:26.243979 waagent[1439]: 2024-02-09T09:57:26.243914Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 09:57:26.258255 waagent[1439]: 2024-02-09T09:57:26.258174Z INFO Daemon Daemon Found device: None Feb 9 09:57:26.317315 waagent[1439]: 2024-02-09T09:57:26.317217Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 09:57:26.358583 waagent[1439]: 2024-02-09T09:57:26.325595Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 09:57:26.359109 waagent[1439]: 2024-02-09T09:57:26.359025Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:57:26.365036 waagent[1439]: 2024-02-09T09:57:26.364949Z INFO Daemon Daemon Running default provisioning handler Feb 9 09:57:26.377876 waagent[1439]: 2024-02-09T09:57:26.377739Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 09:57:26.392481 waagent[1439]: 2024-02-09T09:57:26.392351Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 09:57:26.401844 waagent[1439]: 2024-02-09T09:57:26.401770Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 09:57:26.407214 waagent[1439]: 2024-02-09T09:57:26.407080Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 09:57:28.773158 waagent[1439]: 2024-02-09T09:57:28.773016Z INFO Daemon Daemon Successfully mounted dvd Feb 9 09:57:29.106810 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 09:57:29.123115 waagent[1439]: 2024-02-09T09:57:29.122980Z INFO Daemon Daemon Detect protocol endpoint Feb 9 09:57:29.128452 waagent[1439]: 2024-02-09T09:57:29.128365Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 09:57:29.134642 waagent[1439]: 2024-02-09T09:57:29.134567Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 9 09:57:29.141212 waagent[1439]: 2024-02-09T09:57:29.141142Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 09:57:29.157230 waagent[1439]: 2024-02-09T09:57:29.146677Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 09:57:29.157230 waagent[1439]: 2024-02-09T09:57:29.151774Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 09:57:29.527701 waagent[1439]: 2024-02-09T09:57:29.527624Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 09:57:29.534981 waagent[1439]: 2024-02-09T09:57:29.534936Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 09:57:29.540330 waagent[1439]: 2024-02-09T09:57:29.540259Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 09:57:30.011056 waagent[1439]: 2024-02-09T09:57:30.010896Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 09:57:30.028457 waagent[1439]: 2024-02-09T09:57:30.028380Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 9 09:57:30.034268 waagent[1439]: 2024-02-09T09:57:30.034195Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 09:57:30.114657 waagent[1439]: 2024-02-09T09:57:30.114515Z INFO Daemon Daemon Found private key matching thumbprint BDFD9307EBF38CD35FB5F01C7C7F1B2DA98BBE3E Feb 9 09:57:30.123002 waagent[1439]: 2024-02-09T09:57:30.122914Z INFO Daemon Daemon Certificate with thumbprint F74B3C3F2FDF3D5E6BF9DF1BB7094A1705ED887E has no matching private key. Feb 9 09:57:30.132548 waagent[1439]: 2024-02-09T09:57:30.132468Z INFO Daemon Daemon Fetch goal state completed Feb 9 09:57:30.163957 waagent[1439]: 2024-02-09T09:57:30.163901Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 4d2eba2b-393f-484e-85cc-238a2a6748de New eTag: 9984268629092537101] Feb 9 09:57:30.174389 waagent[1439]: 2024-02-09T09:57:30.174310Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:57:30.194875 waagent[1439]: 2024-02-09T09:57:30.194793Z INFO Daemon Daemon Starting provisioning Feb 9 09:57:30.200048 waagent[1439]: 2024-02-09T09:57:30.199970Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 09:57:30.204720 waagent[1439]: 2024-02-09T09:57:30.204655Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-ac6bbec117] Feb 9 09:57:30.224875 waagent[1439]: 2024-02-09T09:57:30.224748Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-ac6bbec117] Feb 9 09:57:30.231887 waagent[1439]: 2024-02-09T09:57:30.231793Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 09:57:30.238752 waagent[1439]: 2024-02-09T09:57:30.238663Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 09:57:30.255541 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 09:57:30.255722 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 09:57:30.255785 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 09:57:30.256037 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 09:57:30.260337 systemd-networkd[1220]: eth0: DHCPv6 lease lost Feb 9 09:57:30.262237 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:57:30.262431 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:57:30.264641 systemd[1]: Starting systemd-networkd.service... Feb 9 09:57:30.292104 systemd-networkd[1489]: enP4388s1: Link UP Feb 9 09:57:30.292114 systemd-networkd[1489]: enP4388s1: Gained carrier Feb 9 09:57:30.292993 systemd-networkd[1489]: eth0: Link UP Feb 9 09:57:30.293003 systemd-networkd[1489]: eth0: Gained carrier Feb 9 09:57:30.293386 systemd-networkd[1489]: lo: Link UP Feb 9 09:57:30.293394 systemd-networkd[1489]: lo: Gained carrier Feb 9 09:57:30.293628 systemd-networkd[1489]: eth0: Gained IPv6LL Feb 9 09:57:30.294513 systemd-networkd[1489]: Enumeration completed Feb 9 09:57:30.294622 systemd[1]: Started systemd-networkd.service. Feb 9 09:57:30.296770 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:57:30.298883 waagent[1439]: 2024-02-09T09:57:30.298740Z INFO Daemon Daemon Create user account if not exists Feb 9 09:57:30.300828 systemd-networkd[1489]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 09:57:30.304943 waagent[1439]: 2024-02-09T09:57:30.304844Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 09:57:30.310668 waagent[1439]: 2024-02-09T09:57:30.310591Z INFO Daemon Daemon Configure sudoer Feb 9 09:57:30.315605 waagent[1439]: 2024-02-09T09:57:30.315535Z INFO Daemon Daemon Configure sshd Feb 9 09:57:30.320105 waagent[1439]: 2024-02-09T09:57:30.320032Z INFO Daemon Daemon Deploy ssh public key. Feb 9 09:57:30.330496 systemd-networkd[1489]: eth0: DHCPv4 address 10.200.20.10/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 09:57:30.338235 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 09:57:31.530648 waagent[1439]: 2024-02-09T09:57:31.530577Z INFO Daemon Daemon Provisioning complete Feb 9 09:57:31.588148 waagent[1439]: 2024-02-09T09:57:31.588082Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 09:57:31.595299 waagent[1439]: 2024-02-09T09:57:31.595208Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 09:57:31.608073 waagent[1439]: 2024-02-09T09:57:31.607987Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 09:57:31.912065 waagent[1498]: 2024-02-09T09:57:31.911970Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 09:57:31.913196 waagent[1498]: 2024-02-09T09:57:31.913136Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:31.913460 waagent[1498]: 2024-02-09T09:57:31.913409Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:31.930038 waagent[1498]: 2024-02-09T09:57:31.929947Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 09:57:31.930394 waagent[1498]: 2024-02-09T09:57:31.930342Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 09:57:32.001725 waagent[1498]: 2024-02-09T09:57:32.001583Z INFO ExtHandler ExtHandler Found private key matching thumbprint BDFD9307EBF38CD35FB5F01C7C7F1B2DA98BBE3E Feb 9 09:57:32.002105 waagent[1498]: 2024-02-09T09:57:32.002052Z INFO ExtHandler ExtHandler Certificate with thumbprint F74B3C3F2FDF3D5E6BF9DF1BB7094A1705ED887E has no matching private key. 
Feb 9 09:57:32.002467 waagent[1498]: 2024-02-09T09:57:32.002415Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 09:57:32.017779 waagent[1498]: 2024-02-09T09:57:32.017720Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 0a66c783-7369-43fe-8f49-90569725d9b7 New eTag: 9984268629092537101] Feb 9 09:57:32.018595 waagent[1498]: 2024-02-09T09:57:32.018537Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 09:57:32.219603 waagent[1498]: 2024-02-09T09:57:32.219413Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:57:32.230145 waagent[1498]: 2024-02-09T09:57:32.230060Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1498 Feb 9 09:57:32.234073 waagent[1498]: 2024-02-09T09:57:32.234002Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:57:32.235562 waagent[1498]: 2024-02-09T09:57:32.235504Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:57:32.267231 waagent[1498]: 2024-02-09T09:57:32.267172Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:57:32.267817 waagent[1498]: 2024-02-09T09:57:32.267761Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:57:32.275938 waagent[1498]: 2024-02-09T09:57:32.275883Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 09:57:32.276673 waagent[1498]: 2024-02-09T09:57:32.276615Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:57:32.277955 waagent[1498]: 2024-02-09T09:57:32.277894Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 09:57:32.279449 waagent[1498]: 2024-02-09T09:57:32.279381Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:57:32.279825 waagent[1498]: 2024-02-09T09:57:32.279753Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:32.280282 waagent[1498]: 2024-02-09T09:57:32.280216Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:32.280897 waagent[1498]: 2024-02-09T09:57:32.280830Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 09:57:32.281218 waagent[1498]: 2024-02-09T09:57:32.281159Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:57:32.281218 waagent[1498]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:57:32.281218 waagent[1498]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:57:32.281218 waagent[1498]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:57:32.281218 waagent[1498]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:32.281218 waagent[1498]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:32.281218 waagent[1498]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:32.283612 waagent[1498]: 2024-02-09T09:57:32.283444Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 09:57:32.283956 waagent[1498]: 2024-02-09T09:57:32.283885Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:32.284259 waagent[1498]: 2024-02-09T09:57:32.284193Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:32.285391 waagent[1498]: 2024-02-09T09:57:32.285312Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:57:32.285570 waagent[1498]: 2024-02-09T09:57:32.285518Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:57:32.285686 waagent[1498]: 2024-02-09T09:57:32.285644Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:57:32.286670 waagent[1498]: 2024-02-09T09:57:32.286601Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:57:32.286842 waagent[1498]: 2024-02-09T09:57:32.286774Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:57:32.287610 waagent[1498]: 2024-02-09T09:57:32.287514Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:57:32.287811 waagent[1498]: 2024-02-09T09:57:32.287742Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 09:57:32.288114 waagent[1498]: 2024-02-09T09:57:32.288045Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:57:32.306481 waagent[1498]: 2024-02-09T09:57:32.306411Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 09:57:32.307402 waagent[1498]: 2024-02-09T09:57:32.307350Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:57:32.308529 waagent[1498]: 2024-02-09T09:57:32.308474Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 09:57:32.312716 waagent[1498]: 2024-02-09T09:57:32.312641Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1489' Feb 9 09:57:32.327425 waagent[1498]: 2024-02-09T09:57:32.327271Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:57:32.327425 waagent[1498]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:57:32.327425 waagent[1498]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:57:32.327425 waagent[1498]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:a2:96 brd ff:ff:ff:ff:ff:ff Feb 9 09:57:32.327425 waagent[1498]: 3: enP4388s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:a2:96 brd ff:ff:ff:ff:ff:ff\ altname enP4388p0s2 Feb 9 09:57:32.327425 waagent[1498]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:57:32.327425 waagent[1498]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:57:32.327425 waagent[1498]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:57:32.327425 waagent[1498]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:57:32.327425 waagent[1498]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:57:32.327425 waagent[1498]: 2: eth0 inet6 fe80::20d:3aff:fec2:a296/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:57:32.336791 waagent[1498]: 2024-02-09T09:57:32.336722Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Feb 9 09:57:32.703479 waagent[1498]: 2024-02-09T09:57:32.703393Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 9 09:57:32.704705 waagent[1498]: 2024-02-09T09:57:32.704635Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 09:57:33.612661 waagent[1439]: 2024-02-09T09:57:33.612542Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 09:57:33.616311 waagent[1439]: 2024-02-09T09:57:33.616247Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 09:57:34.739153 waagent[1538]: 2024-02-09T09:57:34.739054Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 09:57:34.740168 waagent[1538]: 2024-02-09T09:57:34.740112Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 09:57:34.740411 waagent[1538]: 2024-02-09T09:57:34.740363Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 09:57:34.748102 waagent[1538]: 2024-02-09T09:57:34.747996Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 09:57:34.748656 waagent[1538]: 2024-02-09T09:57:34.748601Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:34.748901 waagent[1538]: 2024-02-09T09:57:34.748852Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:34.766557 waagent[1538]: 2024-02-09T09:57:34.766477Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 09:57:34.775891 waagent[1538]: 2024-02-09T09:57:34.775830Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 09:57:34.777117 waagent[1538]: 2024-02-09T09:57:34.777059Z INFO ExtHandler Feb 9 09:57:34.777400 waagent[1538]: 2024-02-09T09:57:34.777347Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: dd1af8c7-1942-412a-986d-09c012eab1aa eTag: 9984268629092537101 source: Fabric] Feb 9 09:57:34.778227 waagent[1538]: 2024-02-09T09:57:34.778171Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 09:57:34.779554 waagent[1538]: 2024-02-09T09:57:34.779495Z INFO ExtHandler Feb 9 09:57:34.779776 waagent[1538]: 2024-02-09T09:57:34.779728Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 09:57:34.786537 waagent[1538]: 2024-02-09T09:57:34.786488Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 09:57:34.787151 waagent[1538]: 2024-02-09T09:57:34.787106Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 09:57:34.815108 waagent[1538]: 2024-02-09T09:57:34.815046Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 09:57:34.888502 waagent[1538]: 2024-02-09T09:57:34.888368Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F74B3C3F2FDF3D5E6BF9DF1BB7094A1705ED887E', 'hasPrivateKey': False} Feb 9 09:57:34.889748 waagent[1538]: 2024-02-09T09:57:34.889685Z INFO ExtHandler Downloaded certificate {'thumbprint': 'BDFD9307EBF38CD35FB5F01C7C7F1B2DA98BBE3E', 'hasPrivateKey': True} Feb 9 09:57:34.890899 waagent[1538]: 2024-02-09T09:57:34.890838Z INFO ExtHandler Fetch goal state completed Feb 9 09:57:34.914636 waagent[1538]: 2024-02-09T09:57:34.914564Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1538 Feb 9 09:57:34.918236 waagent[1538]: 2024-02-09T09:57:34.918171Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 09:57:34.919826 waagent[1538]: 2024-02-09T09:57:34.919769Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 09:57:34.924669 waagent[1538]: 2024-02-09T09:57:34.924619Z INFO ExtHandler ExtHandler Firewalld service not 
running/unavailable, trying to set up waagent-network-setup.service Feb 9 09:57:34.925160 waagent[1538]: 2024-02-09T09:57:34.925106Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 09:57:34.932717 waagent[1538]: 2024-02-09T09:57:34.932664Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 09:57:34.933321 waagent[1538]: 2024-02-09T09:57:34.933248Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 09:57:34.953767 waagent[1538]: 2024-02-09T09:57:34.953636Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 9 09:57:34.956682 waagent[1538]: 2024-02-09T09:57:34.956565Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 9 09:57:34.960532 waagent[1538]: 2024-02-09T09:57:34.960470Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 09:57:34.962162 waagent[1538]: 2024-02-09T09:57:34.962093Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 09:57:34.962469 waagent[1538]: 2024-02-09T09:57:34.962395Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:34.962662 waagent[1538]: 2024-02-09T09:57:34.962607Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:34.963644 waagent[1538]: 2024-02-09T09:57:34.963565Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 09:57:34.963974 waagent[1538]: 2024-02-09T09:57:34.963911Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 09:57:34.963974 waagent[1538]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 09:57:34.963974 waagent[1538]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 09:57:34.963974 waagent[1538]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 09:57:34.963974 waagent[1538]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:34.963974 waagent[1538]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:34.963974 waagent[1538]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 09:57:34.966151 waagent[1538]: 2024-02-09T09:57:34.966027Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 09:57:34.966643 waagent[1538]: 2024-02-09T09:57:34.966563Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 09:57:34.967007 waagent[1538]: 2024-02-09T09:57:34.966929Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 09:57:34.970195 waagent[1538]: 2024-02-09T09:57:34.970023Z INFO EnvHandler ExtHandler Configure routes Feb 9 09:57:34.970533 waagent[1538]: 2024-02-09T09:57:34.970464Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 09:57:34.970752 waagent[1538]: 2024-02-09T09:57:34.970682Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 09:57:34.971506 waagent[1538]: 2024-02-09T09:57:34.971424Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 09:57:34.972019 waagent[1538]: 2024-02-09T09:57:34.971950Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 09:57:34.972271 waagent[1538]: 2024-02-09T09:57:34.972191Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 09:57:34.977812 waagent[1538]: 2024-02-09T09:57:34.977751Z INFO EnvHandler ExtHandler Gateway:None Feb 9 09:57:34.978651 waagent[1538]: 2024-02-09T09:57:34.978592Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 09:57:34.978651 waagent[1538]: Executing ['ip', '-a', '-o', 'link']: Feb 9 09:57:34.978651 waagent[1538]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 09:57:34.978651 waagent[1538]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:a2:96 brd ff:ff:ff:ff:ff:ff Feb 9 09:57:34.978651 waagent[1538]: 3: enP4388s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c2:a2:96 brd ff:ff:ff:ff:ff:ff\ altname enP4388p0s2 Feb 9 09:57:34.978651 waagent[1538]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 09:57:34.978651 waagent[1538]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 09:57:34.978651 waagent[1538]: 2: eth0 inet 10.200.20.10/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 09:57:34.978651 waagent[1538]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 09:57:34.978651 waagent[1538]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 09:57:34.978651 waagent[1538]: 2: eth0 inet6 fe80::20d:3aff:fec2:a296/64 scope link \ valid_lft forever preferred_lft forever Feb 9 09:57:34.979333 waagent[1538]: 2024-02-09T09:57:34.979250Z INFO EnvHandler ExtHandler Routes:None Feb 9 09:57:34.991377 waagent[1538]: 2024-02-09T09:57:34.991237Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 09:57:34.991567 waagent[1538]: 
2024-02-09T09:57:34.991503Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 09:57:35.029342 waagent[1538]: 2024-02-09T09:57:35.029265Z INFO ExtHandler ExtHandler Feb 9 09:57:35.030764 waagent[1538]: 2024-02-09T09:57:35.030696Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 15abbf08-b5d6-4a18-885b-544d4c36e09d correlation 70f720ea-36f0-4713-bbdd-3694ad69d42d created: 2024-02-09T09:56:33.371124Z] Feb 9 09:57:35.033966 waagent[1538]: 2024-02-09T09:57:35.033874Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 09:57:35.044852 waagent[1538]: 2024-02-09T09:57:35.044782Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 15 ms] Feb 9 09:57:35.067758 waagent[1538]: 2024-02-09T09:57:35.067681Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 09:57:35.067758 waagent[1538]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:57:35.067758 waagent[1538]: pkts bytes target prot opt in out source destination Feb 9 09:57:35.067758 waagent[1538]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:57:35.067758 waagent[1538]: pkts bytes target prot opt in out source destination Feb 9 09:57:35.067758 waagent[1538]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 09:57:35.067758 waagent[1538]: pkts bytes target prot opt in out source destination Feb 9 09:57:35.067758 waagent[1538]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 09:57:35.067758 waagent[1538]: 106 14605 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 09:57:35.067758 waagent[1538]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 09:57:35.068641 waagent[1538]: 2024-02-09T09:57:35.068594Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 09:57:35.082551 waagent[1538]: 2024-02-09T09:57:35.082476Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
Feb 9 09:57:35.095171 waagent[1538]: 2024-02-09T09:57:35.095099Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B5193FDE-CCFF-492B-AF11-C8277DA7D954;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 09:57:59.813282 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 09:58:00.664363 update_engine[1346]: I0209 09:58:00.664328 1346 update_attempter.cc:509] Updating boot flags... Feb 9 09:58:09.869592 systemd[1]: Created slice system-sshd.slice. Feb 9 09:58:09.870698 systemd[1]: Started sshd@0-10.200.20.10:22-10.200.12.6:52742.service. Feb 9 09:58:10.333756 sshd[1643]: Accepted publickey for core from 10.200.12.6 port 52742 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:10.339391 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:10.343344 systemd-logind[1344]: New session 3 of user core. Feb 9 09:58:10.344140 systemd[1]: Started session-3.scope. Feb 9 09:58:10.704394 systemd[1]: Started sshd@1-10.200.20.10:22-10.200.12.6:52754.service. Feb 9 09:58:11.126389 sshd[1648]: Accepted publickey for core from 10.200.12.6 port 52754 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:11.128054 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:11.131343 systemd-logind[1344]: New session 4 of user core. Feb 9 09:58:11.132153 systemd[1]: Started session-4.scope. Feb 9 09:58:11.435791 sshd[1648]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:11.438236 systemd[1]: sshd@1-10.200.20.10:22-10.200.12.6:52754.service: Deactivated successfully. Feb 9 09:58:11.438968 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:58:11.439525 systemd-logind[1344]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:58:11.440460 systemd-logind[1344]: Removed session 4. 
Feb 9 09:58:11.513607 systemd[1]: Started sshd@2-10.200.20.10:22-10.200.12.6:52760.service. Feb 9 09:58:11.939477 sshd[1654]: Accepted publickey for core from 10.200.12.6 port 52760 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:11.940702 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:11.944499 systemd-logind[1344]: New session 5 of user core. Feb 9 09:58:11.944886 systemd[1]: Started session-5.scope. Feb 9 09:58:12.248282 sshd[1654]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:12.250681 systemd[1]: sshd@2-10.200.20.10:22-10.200.12.6:52760.service: Deactivated successfully. Feb 9 09:58:12.251360 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:58:12.251892 systemd-logind[1344]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:58:12.252704 systemd-logind[1344]: Removed session 5. Feb 9 09:58:12.319091 systemd[1]: Started sshd@3-10.200.20.10:22-10.200.12.6:52762.service. Feb 9 09:58:12.739408 sshd[1663]: Accepted publickey for core from 10.200.12.6 port 52762 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:12.740659 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:12.744376 systemd-logind[1344]: New session 6 of user core. Feb 9 09:58:12.744873 systemd[1]: Started session-6.scope. Feb 9 09:58:13.047203 sshd[1663]: pam_unix(sshd:session): session closed for user core Feb 9 09:58:13.049726 systemd[1]: sshd@3-10.200.20.10:22-10.200.12.6:52762.service: Deactivated successfully. Feb 9 09:58:13.050414 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:58:13.050947 systemd-logind[1344]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:58:13.051772 systemd-logind[1344]: Removed session 6. Feb 9 09:58:13.117073 systemd[1]: Started sshd@4-10.200.20.10:22-10.200.12.6:52768.service. 
Feb 9 09:58:13.533540 sshd[1669]: Accepted publickey for core from 10.200.12.6 port 52768 ssh2: RSA SHA256:hRMUPydBxGv30tVhZeI0xMvD4t7pzUsgMKn9+EnuIn0 Feb 9 09:58:13.534752 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:58:13.538552 systemd-logind[1344]: New session 7 of user core. Feb 9 09:58:13.538954 systemd[1]: Started session-7.scope. Feb 9 09:58:13.830342 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:58:13.830538 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:58:14.381780 systemd[1]: Reloading. Feb 9 09:58:14.454444 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-09T09:58:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:58:14.454474 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2024-02-09T09:58:14Z" level=info msg="torcx already run" Feb 9 09:58:14.542214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:58:14.542531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:58:14.558051 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:58:14.658437 systemd[1]: Started kubelet.service. Feb 9 09:58:14.675762 systemd[1]: Starting coreos-metadata.service... 
Feb 9 09:58:14.712765 coreos-metadata[1769]: Feb 09 09:58:14.712 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 09:58:14.714100 kubelet[1760]: E0209 09:58:14.714017 1760 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:58:14.715764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:58:14.715887 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:58:14.716919 coreos-metadata[1769]: Feb 09 09:58:14.716 INFO Fetch successful Feb 9 09:58:14.716919 coreos-metadata[1769]: Feb 09 09:58:14.716 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 9 09:58:14.718318 coreos-metadata[1769]: Feb 09 09:58:14.718 INFO Fetch successful Feb 9 09:58:14.718752 coreos-metadata[1769]: Feb 09 09:58:14.718 INFO Fetching http://168.63.129.16/machine/dcdc6bab-2649-4a3e-b903-2f380024de01/234971f3%2D013f%2D4cbb%2D9a41%2D15f16670d2eb.%5Fci%2D3510.3.2%2Da%2Dac6bbec117?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 9 09:58:14.720449 coreos-metadata[1769]: Feb 09 09:58:14.720 INFO Fetch successful Feb 9 09:58:14.753727 coreos-metadata[1769]: Feb 09 09:58:14.753 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 9 09:58:14.767102 coreos-metadata[1769]: Feb 09 09:58:14.766 INFO Fetch successful Feb 9 09:58:14.775695 systemd[1]: Finished coreos-metadata.service. Feb 9 09:58:15.587154 systemd[1]: Stopped kubelet.service. Feb 9 09:58:15.605080 systemd[1]: Reloading. 
Feb 9 09:58:15.668993 /usr/lib/systemd/system-generators/torcx-generator[1828]: time="2024-02-09T09:58:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:58:15.669030 /usr/lib/systemd/system-generators/torcx-generator[1828]: time="2024-02-09T09:58:15Z" level=info msg="torcx already run" Feb 9 09:58:15.745347 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:58:15.745367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:58:15.760962 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:58:15.847439 systemd[1]: Started kubelet.service. Feb 9 09:58:15.891562 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:15.891562 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:15.891919 kubelet[1885]: I0209 09:58:15.891612 1885 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:58:15.892888 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 09:58:15.892888 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:58:17.358824 kubelet[1885]: I0209 09:58:17.358786 1885 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:58:17.358824 kubelet[1885]: I0209 09:58:17.358816 1885 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:58:17.359161 kubelet[1885]: I0209 09:58:17.359028 1885 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:58:17.362555 kubelet[1885]: I0209 09:58:17.362523 1885 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:58:17.363538 kubelet[1885]: W0209 09:58:17.363510 1885 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:58:17.364445 kubelet[1885]: I0209 09:58:17.364420 1885 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:58:17.364665 kubelet[1885]: I0209 09:58:17.364643 1885 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:58:17.364741 kubelet[1885]: I0209 09:58:17.364725 1885 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:58:17.364833 kubelet[1885]: I0209 09:58:17.364747 1885 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:58:17.364833 kubelet[1885]: I0209 09:58:17.364758 1885 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:58:17.364886 kubelet[1885]: I0209 09:58:17.364863 1885 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 09:58:17.370079 kubelet[1885]: I0209 09:58:17.370062 1885 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:58:17.370197 kubelet[1885]: I0209 09:58:17.370187 1885 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:58:17.370272 kubelet[1885]: I0209 09:58:17.370263 1885 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:58:17.370351 kubelet[1885]: I0209 09:58:17.370341 1885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:58:17.370914 kubelet[1885]: E0209 09:58:17.370899 1885 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:17.371034 kubelet[1885]: E0209 09:58:17.371024 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:17.372471 kubelet[1885]: I0209 09:58:17.372447 1885 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:58:17.372740 kubelet[1885]: W0209 09:58:17.372710 1885 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 9 09:58:17.373414 kubelet[1885]: I0209 09:58:17.373392 1885 server.go:1186] "Started kubelet" Feb 9 09:58:17.373475 kubelet[1885]: I0209 09:58:17.373457 1885 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:58:17.374023 kubelet[1885]: I0209 09:58:17.374005 1885 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:58:17.379686 kubelet[1885]: W0209 09:58:17.379653 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:17.379829 kubelet[1885]: E0209 09:58:17.379817 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:17.380051 kubelet[1885]: E0209 09:58:17.379931 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f269e0470", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 373361264, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 373361264, time.Local), Count:1, 
Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:17.380336 kubelet[1885]: W0209 09:58:17.380321 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:17.380431 kubelet[1885]: E0209 09:58:17.380421 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:17.383954 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:58:17.384473 kubelet[1885]: I0209 09:58:17.384448 1885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:58:17.385035 kubelet[1885]: E0209 09:58:17.385017 1885 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:58:17.385125 kubelet[1885]: E0209 09:58:17.385116 1885 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:58:17.386587 kubelet[1885]: E0209 09:58:17.386505 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f27513420", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 385104416, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 385104416, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.388257 kubelet[1885]: E0209 09:58:17.388243 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:17.388413 kubelet[1885]: I0209 09:58:17.388389 1885 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:58:17.388551 kubelet[1885]: I0209 09:58:17.388540 1885 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:58:17.414791 kubelet[1885]: E0209 09:58:17.414760 1885 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:17.414937 kubelet[1885]: W0209 09:58:17.414806 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:17.414937 kubelet[1885]: E0209 09:58:17.414823 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:17.419398 kubelet[1885]: I0209 09:58:17.419377 1885 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:58:17.419592 kubelet[1885]: I0209 09:58:17.419582 1885 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:58:17.419681 kubelet[1885]: E0209 09:58:17.419373 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.419785 kubelet[1885]: I0209 09:58:17.419671 1885 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:58:17.420360 kubelet[1885]: E0209 09:58:17.420274 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.420880 kubelet[1885]: E0209 09:58:17.420828 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:17.424839 kubelet[1885]: I0209 09:58:17.424807 1885 policy_none.go:49] "None policy: Start" Feb 9 09:58:17.425660 kubelet[1885]: I0209 09:58:17.425642 1885 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:58:17.425769 kubelet[1885]: I0209 09:58:17.425760 1885 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:58:17.433322 systemd[1]: Created slice kubepods.slice. Feb 9 09:58:17.438100 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 09:58:17.441520 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:58:17.468366 kubelet[1885]: I0209 09:58:17.468318 1885 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:58:17.469227 kubelet[1885]: I0209 09:58:17.469197 1885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:58:17.469507 kubelet[1885]: E0209 09:58:17.469493 1885 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.10\" not found" Feb 9 09:58:17.471578 kubelet[1885]: E0209 09:58:17.471487 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f2c680b58", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 470487384, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 470487384, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the 
namespace "default"' (will not retry!) Feb 9 09:58:17.489524 kubelet[1885]: I0209 09:58:17.489487 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:17.490365 kubelet[1885]: E0209 09:58:17.490346 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:17.490787 kubelet[1885]: E0209 09:58:17.490719 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 489429901, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.491569 kubelet[1885]: E0209 09:58:17.491516 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 489442101, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.492493 kubelet[1885]: E0209 09:58:17.492428 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 489447701, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:17.505947 kubelet[1885]: I0209 09:58:17.505924 1885 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:58:17.530439 kubelet[1885]: I0209 09:58:17.530415 1885 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:58:17.530608 kubelet[1885]: I0209 09:58:17.530597 1885 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:58:17.530674 kubelet[1885]: I0209 09:58:17.530664 1885 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:58:17.530762 kubelet[1885]: E0209 09:58:17.530754 1885 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:58:17.532009 kubelet[1885]: W0209 09:58:17.531990 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:17.532146 kubelet[1885]: E0209 09:58:17.532135 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:17.616762 kubelet[1885]: E0209 09:58:17.616659 1885 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:17.691724 kubelet[1885]: I0209 09:58:17.691691 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:17.692748 kubelet[1885]: E0209 09:58:17.692669 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 691653631, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.692966 kubelet[1885]: E0209 09:58:17.692944 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:17.693446 kubelet[1885]: E0209 09:58:17.693384 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 691658911, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:17.778511 kubelet[1885]: E0209 09:58:17.778439 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 691664991, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
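[editor's note] The same three event names recur above (`…294ec78f`, `…294edc2f`, `…294ee8ff`) with FirstTimestamp frozen at 09:58:17.418…, LastTimestamp advancing, and Count stepping 2, 3, 4, … This is client-side event aggregation: a repeat of the same event key updates Count and LastTimestamp on the existing object instead of creating a new one, which is why each rejection reads "cannot **patch** resource events". A minimal sketch of that bookkeeping (names and types are illustrative, not client-go's actual API):

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Illustrative stand-in for the v1.Event fields visible in the log."""
    name: str
    first_ts: float
    last_ts: float
    count: int = 1


class EventAggregator:
    """Sketch of client-side event deduplication: repeats of the same
    event key bump Count/LastTimestamp on the cached object (so the
    client PATCHes the existing event rather than POSTing a new one)."""

    def __init__(self) -> None:
        self._events: dict[str, Event] = {}

    def record(self, name: str, ts: float) -> Event:
        ev = self._events.get(name)
        if ev is None:
            # First occurrence: FirstTimestamp is set once and never moves.
            ev = Event(name=name, first_ts=ts, last_ts=ts, count=1)
            self._events[name] = ev
        else:
            ev.count += 1   # Count: 2, 3, 4, ... as in the log
            ev.last_ts = ts  # only LastTimestamp advances
        return ev
```

Under this model the server rejecting the patch (RBAC) leaves the local Count climbing anyway, matching the monotonically increasing Count in the rejected payloads above.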
Feb 9 09:58:18.018581 kubelet[1885]: E0209 09:58:18.018547 1885 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:18.094582 kubelet[1885]: I0209 09:58:18.094555 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:18.095653 kubelet[1885]: E0209 09:58:18.095619 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:18.095856 kubelet[1885]: E0209 09:58:18.095781 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 94524592, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:18.179151 kubelet[1885]: E0209 09:58:18.179081 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 94529152, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:18.200485 kubelet[1885]: W0209 09:58:18.200457 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:18.200485 kubelet[1885]: E0209 09:58:18.200488 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:18.372198 kubelet[1885]: E0209 09:58:18.372098 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:18.379158 kubelet[1885]: E0209 09:58:18.379079 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 94532073, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:18.518632 kubelet[1885]: W0209 09:58:18.518605 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:18.518798 kubelet[1885]: E0209 09:58:18.518788 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:18.820498 kubelet[1885]: E0209 09:58:18.820469 1885 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:18.894997 kubelet[1885]: W0209 09:58:18.894970 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:18.895152 kubelet[1885]: E0209 09:58:18.895142 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:18.896960 kubelet[1885]: I0209 09:58:18.896938 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:18.897768 kubelet[1885]: 
E0209 09:58:18.897752 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:18.897983 kubelet[1885]: E0209 09:58:18.897897 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 896906882, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:18.898843 kubelet[1885]: E0209 09:58:18.898789 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 896911643, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:18.979132 kubelet[1885]: E0209 09:58:18.979034 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 18, 896915043, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
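[editor's note] Every failure so far has one root cause: the kubelet is still authenticating as `system:anonymous` (its TLS bootstrap has not yet completed), so RBAC denies each list/watch/create/patch. A small, hedged triage helper for tallying which (user, verb, resource) pairs are being denied in an excerpt like this one; the regex targets the unescaped API-server phrasing and will miss the backslash-escaped variants inside quoted err="…" fields:

```python
import re

# Matches the RBAC denial phrasing the API server uses in these logs, e.g.
#   User "system:anonymous" cannot list resource "services" in API group ""
DENIAL = re.compile(
    r'User "(?P<user>[^"]+)" cannot (?P<verb>\w+) resource "(?P<res>[^"]+)"'
)


def tally_denials(log_text: str) -> dict[tuple[str, str, str], int]:
    """Count (user, verb, resource) denial triples in a journal excerpt."""
    counts: dict[tuple[str, str, str], int] = {}
    for m in DENIAL.finditer(log_text):
        key = (m.group("user"), m.group("verb"), m.group("res"))
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Run over this section it would surface the handful of distinct denials (list services/nodes/csidrivers/runtimeclasses, get leases, create nodes, patch events) hidden in the repetition.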
Feb 9 09:58:19.045398 kubelet[1885]: W0209 09:58:19.045370 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:19.045500 kubelet[1885]: E0209 09:58:19.045413 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:19.373601 kubelet[1885]: E0209 09:58:19.373573 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:19.832669 kubelet[1885]: W0209 09:58:19.832644 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:19.832813 kubelet[1885]: E0209 09:58:19.832803 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:20.268458 kubelet[1885]: W0209 09:58:20.268433 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:20.268623 kubelet[1885]: E0209 09:58:20.268613 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at 
the cluster scope Feb 9 09:58:20.374401 kubelet[1885]: E0209 09:58:20.374374 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:20.422184 kubelet[1885]: E0209 09:58:20.422133 1885 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:20.499208 kubelet[1885]: I0209 09:58:20.499189 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:20.500653 kubelet[1885]: E0209 09:58:20.500628 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:20.500743 kubelet[1885]: E0209 09:58:20.500672 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, 
time.February, 9, 9, 58, 20, 499146511, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:20.501587 kubelet[1885]: E0209 09:58:20.501530 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 20, 499159753, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:20.502370 kubelet[1885]: E0209 09:58:20.502313 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 20, 499163593, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:21.118515 kubelet[1885]: W0209 09:58:21.118479 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:21.118515 kubelet[1885]: E0209 09:58:21.118514 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:21.269781 kubelet[1885]: W0209 09:58:21.269755 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:21.269781 kubelet[1885]: E0209 09:58:21.269779 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:58:21.375449 kubelet[1885]: E0209 09:58:21.375363 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:22.376165 kubelet[1885]: E0209 09:58:22.376125 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:23.377256 kubelet[1885]: E0209 09:58:23.377218 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:23.624217 kubelet[1885]: E0209 09:58:23.624175 1885 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: 
leases.coordination.k8s.io "10.200.20.10" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 09:58:23.702063 kubelet[1885]: I0209 09:58:23.702035 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:23.703317 kubelet[1885]: E0209 09:58:23.703281 1885 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.10" Feb 9 09:58:23.703465 kubelet[1885]: E0209 09:58:23.703392 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ec78f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418499983, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 23, 702001906, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ec78f" is forbidden: User "system:anonymous" 
cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:58:23.704163 kubelet[1885]: E0209 09:58:23.704105 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294edc2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418505263, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 23, 702007506, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294edc2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:58:23.704806 kubelet[1885]: E0209 09:58:23.704753 1885 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.10.17b2295f294ee8ff", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.10", UID:"10.200.20.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 58, 17, 418508543, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 58, 23, 702010627, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.10.17b2295f294ee8ff" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
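[editor's note] The lease controller's "will retry in …" delays above double on each failed attempt: 400ms, 800ms, 1.6s, 3.2s, 6.4s. That is plain exponential backoff with factor 2; any ceiling on the delay is not visible in this excerpt, so the sketch below leaves it out. Parameter names are illustrative:

```python
def backoff_schedule(base: float = 0.4, factor: float = 2.0, attempts: int = 5) -> list[float]:
    """Reproduce the doubling retry delays seen in the log:
    0.4s, 0.8s, 1.6s, 3.2s, 6.4s (no cap shown in this excerpt)."""
    delay, schedule = base, []
    for _ in range(attempts):
        schedule.append(round(delay, 3))
        delay *= factor
    return schedule
```

The registration attempts at 09:58:17.691, 18.094, 18.896, 20.499, and 23.702 track this schedule: each retry lands roughly one backoff interval after the previous failure.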
Feb 9 09:58:24.007155 kubelet[1885]: W0209 09:58:24.007067 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:24.007155 kubelet[1885]: E0209 09:58:24.007100 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:58:24.377769 kubelet[1885]: E0209 09:58:24.377661 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:24.790920 kubelet[1885]: W0209 09:58:24.790882 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:24.790920 kubelet[1885]: E0209 09:58:24.790920 1885 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:58:25.378355 kubelet[1885]: E0209 09:58:25.378323 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:25.498118 kubelet[1885]: W0209 09:58:25.498084 1885 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:25.498118 kubelet[1885]: E0209 09:58:25.498119 1885 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:58:26.379669 kubelet[1885]: E0209 09:58:26.379638 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:27.362046 kubelet[1885]: I0209 09:58:27.362013 1885 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 09:58:27.380537 kubelet[1885]: E0209 09:58:27.380512 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:27.470279 kubelet[1885]: E0209 09:58:27.470255 1885 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.10\" not found" Feb 9 09:58:27.745534 kubelet[1885]: E0209 09:58:27.745482 1885 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.10" not found Feb 9 09:58:28.381271 kubelet[1885]: E0209 09:58:28.381243 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:28.983094 kubelet[1885]: E0209 09:58:28.983053 1885 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.10" not found Feb 9 09:58:29.382388 kubelet[1885]: E0209 09:58:29.382286 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:30.028596 kubelet[1885]: E0209 09:58:30.028545 1885 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.10\" not found" node="10.200.20.10" Feb 9 
09:58:30.104604 kubelet[1885]: I0209 09:58:30.104586 1885 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.10" Feb 9 09:58:30.383458 kubelet[1885]: E0209 09:58:30.383356 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:30.384284 kubelet[1885]: I0209 09:58:30.384256 1885 kubelet_node_status.go:73] "Successfully registered node" node="10.200.20.10" Feb 9 09:58:30.402235 kubelet[1885]: E0209 09:58:30.402204 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:30.502864 kubelet[1885]: E0209 09:58:30.502839 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:30.603656 kubelet[1885]: E0209 09:58:30.603622 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:30.704083 kubelet[1885]: E0209 09:58:30.704052 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:30.804557 kubelet[1885]: E0209 09:58:30.804528 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:30.859958 sudo[1672]: pam_unix(sudo:session): session closed for user root Feb 9 09:58:30.905309 kubelet[1885]: E0209 09:58:30.905271 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.006696 kubelet[1885]: E0209 09:58:31.006357 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.107771 kubelet[1885]: E0209 09:58:31.107744 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.146499 sshd[1669]: 
pam_unix(sshd:session): session closed for user core Feb 9 09:58:31.148552 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:58:31.149381 systemd[1]: sshd@4-10.200.20.10:22-10.200.12.6:52768.service: Deactivated successfully. Feb 9 09:58:31.150116 systemd-logind[1344]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:58:31.150838 systemd-logind[1344]: Removed session 7. Feb 9 09:58:31.208235 kubelet[1885]: E0209 09:58:31.208201 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.309053 kubelet[1885]: E0209 09:58:31.308637 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.384203 kubelet[1885]: E0209 09:58:31.384178 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:31.409461 kubelet[1885]: E0209 09:58:31.409441 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.509960 kubelet[1885]: E0209 09:58:31.509939 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.610577 kubelet[1885]: E0209 09:58:31.610220 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.710742 kubelet[1885]: E0209 09:58:31.710717 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.811184 kubelet[1885]: E0209 09:58:31.811162 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:31.911809 kubelet[1885]: E0209 09:58:31.911784 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 
09:58:32.012261 kubelet[1885]: E0209 09:58:32.012238 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:32.112695 kubelet[1885]: E0209 09:58:32.112678 1885 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.10\" not found" Feb 9 09:58:32.213759 kubelet[1885]: I0209 09:58:32.213449 1885 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 09:58:32.214179 env[1360]: time="2024-02-09T09:58:32.214146709Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:58:32.214704 kubelet[1885]: I0209 09:58:32.214690 1885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 09:58:32.380930 kubelet[1885]: I0209 09:58:32.380905 1885 apiserver.go:52] "Watching apiserver" Feb 9 09:58:32.383202 kubelet[1885]: I0209 09:58:32.383177 1885 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:32.383414 kubelet[1885]: I0209 09:58:32.383401 1885 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:58:32.385499 kubelet[1885]: E0209 09:58:32.385481 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:32.387811 systemd[1]: Created slice kubepods-besteffort-pod3de9792e_c87c_4bdf_b1aa_24fc9e9a93d3.slice. Feb 9 09:58:32.390428 kubelet[1885]: I0209 09:58:32.390410 1885 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:58:32.402756 systemd[1]: Created slice kubepods-burstable-pod2326389e_238d_4bdf_a44b_64b797fc254e.slice. 
Feb 9 09:58:32.450270 kubelet[1885]: I0209 09:58:32.450244 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3-kube-proxy\") pod \"kube-proxy-6cmdd\" (UID: \"3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3\") " pod="kube-system/kube-proxy-6cmdd" Feb 9 09:58:32.450479 kubelet[1885]: I0209 09:58:32.450467 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-run\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.450584 kubelet[1885]: I0209 09:58:32.450574 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-hostproc\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.450679 kubelet[1885]: I0209 09:58:32.450670 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-lib-modules\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.450783 kubelet[1885]: I0209 09:58:32.450774 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-net\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.450879 kubelet[1885]: I0209 09:58:32.450870 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cni-path\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.450985 kubelet[1885]: I0209 09:58:32.450976 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-xtables-lock\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451082 kubelet[1885]: I0209 09:58:32.451073 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-config-path\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451194 kubelet[1885]: I0209 09:58:32.451172 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-kernel\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451247 kubelet[1885]: I0209 09:58:32.451213 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-hubble-tls\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451276 kubelet[1885]: I0209 09:58:32.451266 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4d7\" (UniqueName: 
\"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-kube-api-access-jx4d7\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451321 kubelet[1885]: I0209 09:58:32.451304 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3-xtables-lock\") pod \"kube-proxy-6cmdd\" (UID: \"3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3\") " pod="kube-system/kube-proxy-6cmdd" Feb 9 09:58:32.451356 kubelet[1885]: I0209 09:58:32.451337 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3-lib-modules\") pod \"kube-proxy-6cmdd\" (UID: \"3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3\") " pod="kube-system/kube-proxy-6cmdd" Feb 9 09:58:32.451382 kubelet[1885]: I0209 09:58:32.451365 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zns49\" (UniqueName: \"kubernetes.io/projected/3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3-kube-api-access-zns49\") pod \"kube-proxy-6cmdd\" (UID: \"3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3\") " pod="kube-system/kube-proxy-6cmdd" Feb 9 09:58:32.451411 kubelet[1885]: I0209 09:58:32.451389 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-etc-cni-netd\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451411 kubelet[1885]: I0209 09:58:32.451409 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-bpf-maps\") pod \"cilium-khpzj\" (UID: 
\"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451460 kubelet[1885]: I0209 09:58:32.451442 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-cgroup\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451485 kubelet[1885]: I0209 09:58:32.451463 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2326389e-238d-4bdf-a44b-64b797fc254e-clustermesh-secrets\") pod \"cilium-khpzj\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " pod="kube-system/cilium-khpzj" Feb 9 09:58:32.451485 kubelet[1885]: I0209 09:58:32.451477 1885 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:58:33.314995 env[1360]: time="2024-02-09T09:58:33.314948375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khpzj,Uid:2326389e-238d-4bdf-a44b-64b797fc254e,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:33.390713 kubelet[1885]: E0209 09:58:33.390684 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:33.601517 env[1360]: time="2024-02-09T09:58:33.601414835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6cmdd,Uid:3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3,Namespace:kube-system,Attempt:0,}" Feb 9 09:58:34.155184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078773474.mount: Deactivated successfully. 
Feb 9 09:58:34.179044 env[1360]: time="2024-02-09T09:58:34.178997036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.183888 env[1360]: time="2024-02-09T09:58:34.183841203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.193704 env[1360]: time="2024-02-09T09:58:34.193664505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.198230 env[1360]: time="2024-02-09T09:58:34.198184453Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.201656 env[1360]: time="2024-02-09T09:58:34.201624777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.205546 env[1360]: time="2024-02-09T09:58:34.205508647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.215500 env[1360]: time="2024-02-09T09:58:34.215467357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.262251 env[1360]: time="2024-02-09T09:58:34.262200045Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:34.391116 kubelet[1885]: E0209 09:58:34.391078 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:34.717502 env[1360]: time="2024-02-09T09:58:34.717439369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:34.717860 env[1360]: time="2024-02-09T09:58:34.717835592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:34.717962 env[1360]: time="2024-02-09T09:58:34.717940278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:34.719281 env[1360]: time="2024-02-09T09:58:34.718215774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea pid=1969 runtime=io.containerd.runc.v2 Feb 9 09:58:34.735566 systemd[1]: Started cri-containerd-6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea.scope. Feb 9 09:58:34.761336 env[1360]: time="2024-02-09T09:58:34.761271805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6cmdd,Uid:3de9792e-c87c-4bdf-b1aa-24fc9e9a93d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea\"" Feb 9 09:58:34.764119 env[1360]: time="2024-02-09T09:58:34.764079291Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:58:34.776722 env[1360]: time="2024-02-09T09:58:34.776636595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:58:34.776722 env[1360]: time="2024-02-09T09:58:34.776680437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:58:34.776722 env[1360]: time="2024-02-09T09:58:34.776691358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:58:34.777356 env[1360]: time="2024-02-09T09:58:34.777130264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b pid=2011 runtime=io.containerd.runc.v2 Feb 9 09:58:34.792834 systemd[1]: Started cri-containerd-a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b.scope. Feb 9 09:58:34.817367 env[1360]: time="2024-02-09T09:58:34.817327165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khpzj,Uid:2326389e-238d-4bdf-a44b-64b797fc254e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\"" Feb 9 09:58:35.148693 systemd[1]: run-containerd-runc-k8s.io-6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea-runc.YesZnO.mount: Deactivated successfully. Feb 9 09:58:35.391586 kubelet[1885]: E0209 09:58:35.391537 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:36.392192 kubelet[1885]: E0209 09:58:36.392160 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:36.751371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511584206.mount: Deactivated successfully. 
Feb 9 09:58:37.150439 env[1360]: time="2024-02-09T09:58:37.150395703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:37.157559 env[1360]: time="2024-02-09T09:58:37.157522734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:37.162127 env[1360]: time="2024-02-09T09:58:37.162098744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:37.165365 env[1360]: time="2024-02-09T09:58:37.165339082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:37.165793 env[1360]: time="2024-02-09T09:58:37.165767185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 09:58:37.166960 env[1360]: time="2024-02-09T09:58:37.166926529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:58:37.168149 env[1360]: time="2024-02-09T09:58:37.168110714Z" level=info msg="CreateContainer within sandbox \"6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:58:37.199857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077284960.mount: Deactivated successfully. 
Feb 9 09:58:37.267042 env[1360]: time="2024-02-09T09:58:37.266984529Z" level=info msg="CreateContainer within sandbox \"6fffbb9fbaa2b3f9d4722f02cbd1fee5c0436afb8b5ca0cb6124cb084c9697ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ed1ef2a2a77050e2d224175a10fc90d4d5a3b9b312687a719d40d814c0530f3\"" Feb 9 09:58:37.267880 env[1360]: time="2024-02-09T09:58:37.267848817Z" level=info msg="StartContainer for \"9ed1ef2a2a77050e2d224175a10fc90d4d5a3b9b312687a719d40d814c0530f3\"" Feb 9 09:58:37.282971 systemd[1]: Started cri-containerd-9ed1ef2a2a77050e2d224175a10fc90d4d5a3b9b312687a719d40d814c0530f3.scope. Feb 9 09:58:37.318038 env[1360]: time="2024-02-09T09:58:37.317998524Z" level=info msg="StartContainer for \"9ed1ef2a2a77050e2d224175a10fc90d4d5a3b9b312687a719d40d814c0530f3\" returns successfully" Feb 9 09:58:37.370795 kubelet[1885]: E0209 09:58:37.370756 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:37.394367 kubelet[1885]: E0209 09:58:37.394342 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:37.469457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904959684.mount: Deactivated successfully. 
Feb 9 09:58:37.564496 kubelet[1885]: I0209 09:58:37.564467 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6cmdd" podStartSLOduration=-9.223372029290354e+09 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="2024-02-09 09:58:34.763138355 +0000 UTC m=+18.911879481" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:58:37.563814788 +0000 UTC m=+21.712555914" watchObservedRunningTime="2024-02-09 09:58:37.564421261 +0000 UTC m=+21.713162387" Feb 9 09:58:38.395091 kubelet[1885]: E0209 09:58:38.395054 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:39.395970 kubelet[1885]: E0209 09:58:39.395911 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:40.396618 kubelet[1885]: E0209 09:58:40.396594 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:41.397686 kubelet[1885]: E0209 09:58:41.397651 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:42.398582 kubelet[1885]: E0209 09:58:42.398548 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:43.399610 kubelet[1885]: E0209 09:58:43.399577 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:44.399699 kubelet[1885]: E0209 09:58:44.399667 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:45.004315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475798657.mount: Deactivated successfully. 
Feb 9 09:58:45.400320 kubelet[1885]: E0209 09:58:45.400260 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:46.401384 kubelet[1885]: E0209 09:58:46.401347 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:47.401554 kubelet[1885]: E0209 09:58:47.401497 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:48.402372 kubelet[1885]: E0209 09:58:48.402336 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:50.611082 kubelet[1885]: E0209 09:58:49.402772 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:50.611082 kubelet[1885]: E0209 09:58:50.403285 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:50.770192 env[1360]: time="2024-02-09T09:58:50.770119769Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.776462 env[1360]: time="2024-02-09T09:58:50.776421699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.780284 env[1360]: time="2024-02-09T09:58:50.780250771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:58:50.780776 env[1360]: 
time="2024-02-09T09:58:50.780746551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:58:50.782948 env[1360]: time="2024-02-09T09:58:50.782919117Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:58:50.833741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619996306.mount: Deactivated successfully. Feb 9 09:58:50.837838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994676047.mount: Deactivated successfully. Feb 9 09:58:50.853190 env[1360]: time="2024-02-09T09:58:50.853144065Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\"" Feb 9 09:58:50.854118 env[1360]: time="2024-02-09T09:58:50.854093862Z" level=info msg="StartContainer for \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\"" Feb 9 09:58:50.870499 systemd[1]: Started cri-containerd-64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e.scope. Feb 9 09:58:50.902545 env[1360]: time="2024-02-09T09:58:50.902497944Z" level=info msg="StartContainer for \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\" returns successfully" Feb 9 09:58:50.907377 systemd[1]: cri-containerd-64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e.scope: Deactivated successfully. 
Feb 9 09:58:52.160121 kubelet[1885]: E0209 09:58:51.403571 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:51.831609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e-rootfs.mount: Deactivated successfully. Feb 9 09:58:52.404517 kubelet[1885]: E0209 09:58:52.404485 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:53.405256 kubelet[1885]: E0209 09:58:53.405228 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:54.406466 kubelet[1885]: E0209 09:58:54.406439 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:55.407891 kubelet[1885]: E0209 09:58:55.407863 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:56.408396 kubelet[1885]: E0209 09:58:56.408357 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:57.370887 kubelet[1885]: E0209 09:58:57.370861 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:57.409286 kubelet[1885]: E0209 09:58:57.409261 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:58.409989 kubelet[1885]: E0209 09:58:58.409958 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:58:59.410882 kubelet[1885]: E0209 09:58:59.410856 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
09:59:00.412206 kubelet[1885]: E0209 09:59:00.412157 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:00.910250 env[1360]: time="2024-02-09T09:59:00.910207827Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e,ID:64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e,Pid:2234,ExitStatus:0,ExitedAt:2024-02-09 09:58:50.909407258 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Feb 9 09:59:01.412587 kubelet[1885]: E0209 09:59:01.412565 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:02.414005 kubelet[1885]: E0209 09:59:02.413972 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:02.509995 env[1360]: time="2024-02-09T09:59:02.509956312Z" level=info msg="TaskExit event &TaskExit{ContainerID:64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e,ID:64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e,Pid:2234,ExitStatus:0,ExitedAt:2024-02-09 09:58:50.909407258 +0000 UTC,XXX_unrecognized:[],}" Feb 9 09:59:02.590410 env[1360]: time="2024-02-09T09:59:02.590372513Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:59:02.618099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447479723.mount: Deactivated successfully. 
Feb 9 09:59:02.633697 env[1360]: time="2024-02-09T09:59:02.633650826Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\""
Feb 9 09:59:02.634600 env[1360]: time="2024-02-09T09:59:02.634576254Z" level=info msg="StartContainer for \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\""
Feb 9 09:59:02.652186 systemd[1]: Started cri-containerd-5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028.scope.
Feb 9 09:59:02.683958 env[1360]: time="2024-02-09T09:59:02.683850670Z" level=info msg="StartContainer for \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\" returns successfully"
Feb 9 09:59:02.691216 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:59:02.691423 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:59:02.692085 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 09:59:02.693814 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:59:02.698766 systemd[1]: cri-containerd-5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028.scope: Deactivated successfully.
Feb 9 09:59:02.703871 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:59:02.735929 env[1360]: time="2024-02-09T09:59:02.735876889Z" level=info msg="shim disconnected" id=5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028
Feb 9 09:59:02.735929 env[1360]: time="2024-02-09T09:59:02.735925730Z" level=warning msg="cleaning up after shim disconnected" id=5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028 namespace=k8s.io
Feb 9 09:59:02.736146 env[1360]: time="2024-02-09T09:59:02.735938050Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:02.742368 env[1360]: time="2024-02-09T09:59:02.742315084Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2318 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:03.414780 kubelet[1885]: E0209 09:59:03.414750 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:03.592775 env[1360]: time="2024-02-09T09:59:03.592728570Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:59:03.615533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028-rootfs.mount: Deactivated successfully.
Feb 9 09:59:03.619977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715299482.mount: Deactivated successfully.
Feb 9 09:59:03.622892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752221742.mount: Deactivated successfully.
Feb 9 09:59:03.635479 env[1360]: time="2024-02-09T09:59:03.635432639Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\""
Feb 9 09:59:03.636403 env[1360]: time="2024-02-09T09:59:03.636287864Z" level=info msg="StartContainer for \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\""
Feb 9 09:59:03.651552 systemd[1]: Started cri-containerd-e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe.scope.
Feb 9 09:59:03.679820 systemd[1]: cri-containerd-e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe.scope: Deactivated successfully.
Feb 9 09:59:03.686950 env[1360]: time="2024-02-09T09:59:03.686906208Z" level=info msg="StartContainer for \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\" returns successfully"
Feb 9 09:59:03.714934 env[1360]: time="2024-02-09T09:59:03.714878880Z" level=info msg="shim disconnected" id=e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe
Feb 9 09:59:03.714934 env[1360]: time="2024-02-09T09:59:03.714931321Z" level=warning msg="cleaning up after shim disconnected" id=e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe namespace=k8s.io
Feb 9 09:59:03.714934 env[1360]: time="2024-02-09T09:59:03.714941882Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:03.721873 env[1360]: time="2024-02-09T09:59:03.721828606Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2377 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:04.415058 kubelet[1885]: E0209 09:59:04.414989 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:04.595716 env[1360]: time="2024-02-09T09:59:04.595669456Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:59:04.620982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262447941.mount: Deactivated successfully.
Feb 9 09:59:04.626073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053439190.mount: Deactivated successfully.
Feb 9 09:59:04.639965 env[1360]: time="2024-02-09T09:59:04.639893743Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\""
Feb 9 09:59:04.640471 env[1360]: time="2024-02-09T09:59:04.640445719Z" level=info msg="StartContainer for \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\""
Feb 9 09:59:04.655362 systemd[1]: Started cri-containerd-7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281.scope.
Feb 9 09:59:04.681188 systemd[1]: cri-containerd-7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281.scope: Deactivated successfully.
Feb 9 09:59:04.688501 env[1360]: time="2024-02-09T09:59:04.688456317Z" level=info msg="StartContainer for \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\" returns successfully"
Feb 9 09:59:04.716679 env[1360]: time="2024-02-09T09:59:04.716634137Z" level=info msg="shim disconnected" id=7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281
Feb 9 09:59:04.716896 env[1360]: time="2024-02-09T09:59:04.716878704Z" level=warning msg="cleaning up after shim disconnected" id=7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281 namespace=k8s.io
Feb 9 09:59:04.716955 env[1360]: time="2024-02-09T09:59:04.716943066Z" level=info msg="cleaning up dead shim"
Feb 9 09:59:04.724919 env[1360]: time="2024-02-09T09:59:04.724880457Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2434 runtime=io.containerd.runc.v2\n"
Feb 9 09:59:05.415920 kubelet[1885]: E0209 09:59:05.415889 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:05.599173 env[1360]: time="2024-02-09T09:59:05.599118472Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:59:05.626640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195011864.mount: Deactivated successfully.
Feb 9 09:59:05.630907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032655563.mount: Deactivated successfully.
Feb 9 09:59:05.657883 env[1360]: time="2024-02-09T09:59:05.657826186Z" level=info msg="CreateContainer within sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\""
Feb 9 09:59:05.658350 env[1360]: time="2024-02-09T09:59:05.658323080Z" level=info msg="StartContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\""
Feb 9 09:59:05.673486 systemd[1]: Started cri-containerd-3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b.scope.
Feb 9 09:59:05.714839 env[1360]: time="2024-02-09T09:59:05.714789530Z" level=info msg="StartContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" returns successfully"
Feb 9 09:59:05.797346 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 09:59:05.817020 kubelet[1885]: I0209 09:59:05.816103 1885 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 09:59:06.080329 kernel: Initializing XFRM netlink socket
Feb 9 09:59:06.092330 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 09:59:06.416920 kubelet[1885]: E0209 09:59:06.416890 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:06.617316 kubelet[1885]: I0209 09:59:06.617027 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-khpzj" podStartSLOduration=-9.223372000237787e+09 pod.CreationTimestamp="2024-02-09 09:58:30 +0000 UTC" firstStartedPulling="2024-02-09 09:58:34.824253455 +0000 UTC m=+18.972994581" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:06.61427967 +0000 UTC m=+50.763020796" watchObservedRunningTime="2024-02-09 09:59:06.616988746 +0000 UTC m=+50.765729872"
Feb 9 09:59:07.417783 kubelet[1885]: E0209 09:59:07.417721 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:07.734357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 09:59:07.733733 systemd-networkd[1489]: cilium_host: Link UP
Feb 9 09:59:07.733838 systemd-networkd[1489]: cilium_net: Link UP
Feb 9 09:59:07.733840 systemd-networkd[1489]: cilium_net: Gained carrier
Feb 9 09:59:07.733956 systemd-networkd[1489]: cilium_host: Gained carrier
Feb 9 09:59:07.734133 systemd-networkd[1489]: cilium_host: Gained IPv6LL
Feb 9 09:59:07.748471 systemd-networkd[1489]: cilium_net: Gained IPv6LL
Feb 9 09:59:07.846270 systemd-networkd[1489]: cilium_vxlan: Link UP
Feb 9 09:59:07.846278 systemd-networkd[1489]: cilium_vxlan: Gained carrier
Feb 9 09:59:08.057324 kernel: NET: Registered PF_ALG protocol family
Feb 9 09:59:08.418069 kubelet[1885]: E0209 09:59:08.418022 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:08.809379 kubelet[1885]: I0209 09:59:08.809249 1885 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:59:08.813765 systemd[1]: Created slice kubepods-besteffort-pod0d06231d_abe0_4518_96e1_080ce3c0219a.slice.
Feb 9 09:59:08.951059 kubelet[1885]: I0209 09:59:08.951026 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lngh\" (UniqueName: \"kubernetes.io/projected/0d06231d-abe0-4518-96e1-080ce3c0219a-kube-api-access-2lngh\") pod \"nginx-deployment-8ffc5cf85-mrhdv\" (UID: \"0d06231d-abe0-4518-96e1-080ce3c0219a\") " pod="default/nginx-deployment-8ffc5cf85-mrhdv"
Feb 9 09:59:09.003409 systemd-networkd[1489]: lxc_health: Link UP
Feb 9 09:59:09.020490 systemd-networkd[1489]: lxc_health: Gained carrier
Feb 9 09:59:09.021449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:59:09.116508 env[1360]: time="2024-02-09T09:59:09.116390569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-mrhdv,Uid:0d06231d-abe0-4518-96e1-080ce3c0219a,Namespace:default,Attempt:0,}"
Feb 9 09:59:09.310892 systemd-networkd[1489]: lxcbd326532477c: Link UP
Feb 9 09:59:09.324323 kernel: eth0: renamed from tmp477d0
Feb 9 09:59:09.340314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbd326532477c: link becomes ready
Feb 9 09:59:09.341501 systemd-networkd[1489]: lxcbd326532477c: Gained carrier
Feb 9 09:59:09.418880 kubelet[1885]: E0209 09:59:09.418847 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:09.592511 systemd-networkd[1489]: cilium_vxlan: Gained IPv6LL
Feb 9 09:59:10.420228 kubelet[1885]: E0209 09:59:10.420178 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:11.000433 systemd-networkd[1489]: lxc_health: Gained IPv6LL
Feb 9 09:59:11.192408 systemd-networkd[1489]: lxcbd326532477c: Gained IPv6LL
Feb 9 09:59:11.420724 kubelet[1885]: E0209 09:59:11.420688 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:12.421223 kubelet[1885]: E0209 09:59:12.421184 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:12.873119 env[1360]: time="2024-02-09T09:59:12.872782785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:59:12.873486 env[1360]: time="2024-02-09T09:59:12.873458042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:59:12.873570 env[1360]: time="2024-02-09T09:59:12.873550324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:59:12.873822 env[1360]: time="2024-02-09T09:59:12.873775410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c pid=2962 runtime=io.containerd.runc.v2
Feb 9 09:59:12.891037 systemd[1]: run-containerd-runc-k8s.io-477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c-runc.m2seVy.mount: Deactivated successfully.
Feb 9 09:59:12.894483 systemd[1]: Started cri-containerd-477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c.scope.
Feb 9 09:59:12.926679 env[1360]: time="2024-02-09T09:59:12.926633964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-mrhdv,Uid:0d06231d-abe0-4518-96e1-080ce3c0219a,Namespace:default,Attempt:0,} returns sandbox id \"477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c\""
Feb 9 09:59:12.928799 env[1360]: time="2024-02-09T09:59:12.928771697Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:59:13.421838 kubelet[1885]: E0209 09:59:13.421788 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:14.423058 kubelet[1885]: E0209 09:59:14.422927 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:15.423527 kubelet[1885]: E0209 09:59:15.423485 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:16.274814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066553495.mount: Deactivated successfully.
Feb 9 09:59:16.424039 kubelet[1885]: E0209 09:59:16.423999 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:17.371125 kubelet[1885]: E0209 09:59:17.371067 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:17.424542 kubelet[1885]: E0209 09:59:17.424505 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:17.812797 env[1360]: time="2024-02-09T09:59:17.812727953Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:17.821059 env[1360]: time="2024-02-09T09:59:17.821001421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:17.825382 env[1360]: time="2024-02-09T09:59:17.825349720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:17.865383 env[1360]: time="2024-02-09T09:59:17.865336908Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:17.866307 env[1360]: time="2024-02-09T09:59:17.866264169Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:59:17.868668 env[1360]: time="2024-02-09T09:59:17.868621422Z" level=info msg="CreateContainer within sandbox \"477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 09:59:18.076724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553172836.mount: Deactivated successfully.
Feb 9 09:59:18.167758 env[1360]: time="2024-02-09T09:59:18.167701151Z" level=info msg="CreateContainer within sandbox \"477d0f0242436fc3f8e4b95e62b34e4046757a04ee1c426bf734c8998dda187c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f4894b0e3f85a2aed1eed22a33fc2bbd26e4f3deaa00fb73c23db4802315cde1\""
Feb 9 09:59:18.168210 env[1360]: time="2024-02-09T09:59:18.168186042Z" level=info msg="StartContainer for \"f4894b0e3f85a2aed1eed22a33fc2bbd26e4f3deaa00fb73c23db4802315cde1\""
Feb 9 09:59:18.186681 systemd[1]: Started cri-containerd-f4894b0e3f85a2aed1eed22a33fc2bbd26e4f3deaa00fb73c23db4802315cde1.scope.
Feb 9 09:59:18.215350 env[1360]: time="2024-02-09T09:59:18.215305414Z" level=info msg="StartContainer for \"f4894b0e3f85a2aed1eed22a33fc2bbd26e4f3deaa00fb73c23db4802315cde1\" returns successfully"
Feb 9 09:59:18.424782 kubelet[1885]: E0209 09:59:18.424715 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:19.425608 kubelet[1885]: E0209 09:59:19.425575 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:20.427083 kubelet[1885]: E0209 09:59:20.427033 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:21.427862 kubelet[1885]: E0209 09:59:21.427829 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:22.429306 kubelet[1885]: E0209 09:59:22.429265 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:23.429863 kubelet[1885]: E0209 09:59:23.429824 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:24.026763 kubelet[1885]: I0209 09:59:24.026736 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-mrhdv" podStartSLOduration=-9.223372020828081e+09 pod.CreationTimestamp="2024-02-09 09:59:08 +0000 UTC" firstStartedPulling="2024-02-09 09:59:12.928093761 +0000 UTC m=+57.076834887" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:18.628068187 +0000 UTC m=+62.776809313" watchObservedRunningTime="2024-02-09 09:59:24.026694987 +0000 UTC m=+68.175436113"
Feb 9 09:59:24.027133 kubelet[1885]: I0209 09:59:24.027117 1885 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:59:24.031642 systemd[1]: Created slice kubepods-besteffort-pod5785aa93_0922_491f_b527_68632ea24ca6.slice.
Feb 9 09:59:24.127702 kubelet[1885]: I0209 09:59:24.127674 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5785aa93-0922-491f-b527-68632ea24ca6-data\") pod \"nfs-server-provisioner-0\" (UID: \"5785aa93-0922-491f-b527-68632ea24ca6\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:59:24.127904 kubelet[1885]: I0209 09:59:24.127891 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86tj\" (UniqueName: \"kubernetes.io/projected/5785aa93-0922-491f-b527-68632ea24ca6-kube-api-access-f86tj\") pod \"nfs-server-provisioner-0\" (UID: \"5785aa93-0922-491f-b527-68632ea24ca6\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:59:24.335753 env[1360]: time="2024-02-09T09:59:24.335234344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5785aa93-0922-491f-b527-68632ea24ca6,Namespace:default,Attempt:0,}"
Feb 9 09:59:24.398468 systemd-networkd[1489]: lxcf570e15873a7: Link UP
Feb 9 09:59:24.412337 kernel: eth0: renamed from tmpb6f23
Feb 9 09:59:24.427373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:59:24.427502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf570e15873a7: link becomes ready
Feb 9 09:59:24.427732 systemd-networkd[1489]: lxcf570e15873a7: Gained carrier
Feb 9 09:59:24.431083 kubelet[1885]: E0209 09:59:24.430758 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:24.621997 env[1360]: time="2024-02-09T09:59:24.621503091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:59:24.622133 env[1360]: time="2024-02-09T09:59:24.621541852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:59:24.622133 env[1360]: time="2024-02-09T09:59:24.621552932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:59:24.622133 env[1360]: time="2024-02-09T09:59:24.621878339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32 pid=3137 runtime=io.containerd.runc.v2
Feb 9 09:59:24.638984 systemd[1]: Started cri-containerd-b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32.scope.
Feb 9 09:59:24.670582 env[1360]: time="2024-02-09T09:59:24.670535122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5785aa93-0922-491f-b527-68632ea24ca6,Namespace:default,Attempt:0,} returns sandbox id \"b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32\""
Feb 9 09:59:24.672252 env[1360]: time="2024-02-09T09:59:24.672221077Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 09:59:25.240228 systemd[1]: run-containerd-runc-k8s.io-b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32-runc.JFpaab.mount: Deactivated successfully.
Feb 9 09:59:25.431693 kubelet[1885]: E0209 09:59:25.431629 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:26.232591 systemd-networkd[1489]: lxcf570e15873a7: Gained IPv6LL
Feb 9 09:59:26.432508 kubelet[1885]: E0209 09:59:26.432470 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:27.433213 kubelet[1885]: E0209 09:59:27.433171 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:28.433475 kubelet[1885]: E0209 09:59:28.433430 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:29.277848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188427897.mount: Deactivated successfully.
Feb 9 09:59:29.434132 kubelet[1885]: E0209 09:59:29.434086 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:30.435082 kubelet[1885]: E0209 09:59:30.435041 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:31.435450 kubelet[1885]: E0209 09:59:31.435412 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:32.436042 kubelet[1885]: E0209 09:59:32.436002 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:33.436603 kubelet[1885]: E0209 09:59:33.436545 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:34.098811 env[1360]: time="2024-02-09T09:59:34.098756957Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:34.109685 env[1360]: time="2024-02-09T09:59:34.109634908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:34.116187 env[1360]: time="2024-02-09T09:59:34.116149102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:34.121694 env[1360]: time="2024-02-09T09:59:34.121659838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:34.122342 env[1360]: time="2024-02-09T09:59:34.122313929Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 09:59:34.124775 env[1360]: time="2024-02-09T09:59:34.124743612Z" level=info msg="CreateContainer within sandbox \"b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 09:59:34.150162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927207135.mount: Deactivated successfully.
Feb 9 09:59:34.154783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301700932.mount: Deactivated successfully.
Feb 9 09:59:34.178996 env[1360]: time="2024-02-09T09:59:34.178947480Z" level=info msg="CreateContainer within sandbox \"b6f23830a5b9691f88ce522928a572e0aa3e796fd787becba4fe31b554e7ca32\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6a7c8ff6dc149ca1955ac5accdd1b6b1ca99553f4118d8013d72a8af3a6e5923\""
Feb 9 09:59:34.179634 env[1360]: time="2024-02-09T09:59:34.179607091Z" level=info msg="StartContainer for \"6a7c8ff6dc149ca1955ac5accdd1b6b1ca99553f4118d8013d72a8af3a6e5923\""
Feb 9 09:59:34.197079 systemd[1]: Started cri-containerd-6a7c8ff6dc149ca1955ac5accdd1b6b1ca99553f4118d8013d72a8af3a6e5923.scope.
Feb 9 09:59:34.231815 env[1360]: time="2024-02-09T09:59:34.231771284Z" level=info msg="StartContainer for \"6a7c8ff6dc149ca1955ac5accdd1b6b1ca99553f4118d8013d72a8af3a6e5923\" returns successfully"
Feb 9 09:59:34.437618 kubelet[1885]: E0209 09:59:34.437581 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:34.662454 kubelet[1885]: I0209 09:59:34.662417 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372025192394e+09 pod.CreationTimestamp="2024-02-09 09:59:23 +0000 UTC" firstStartedPulling="2024-02-09 09:59:24.671824269 +0000 UTC m=+68.820565395" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:34.661605721 +0000 UTC m=+78.810346847" watchObservedRunningTime="2024-02-09 09:59:34.662381294 +0000 UTC m=+78.811122380"
Feb 9 09:59:35.437771 kubelet[1885]: E0209 09:59:35.437721 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:36.438634 kubelet[1885]: E0209 09:59:36.438605 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:37.370653 kubelet[1885]: E0209 09:59:37.370612 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:37.440095 kubelet[1885]: E0209 09:59:37.440054 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:38.440463 kubelet[1885]: E0209 09:59:38.440419 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:39.441179 kubelet[1885]: E0209 09:59:39.441145 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:40.442113 kubelet[1885]: E0209 09:59:40.442074 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:41.442413 kubelet[1885]: E0209 09:59:41.442375 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:42.443084 kubelet[1885]: E0209 09:59:42.443052 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:43.443947 kubelet[1885]: E0209 09:59:43.443910 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:44.201396 kubelet[1885]: I0209 09:59:44.201352 1885 topology_manager.go:210] "Topology Admit Handler"
Feb 9 09:59:44.205756 systemd[1]: Created slice kubepods-besteffort-pod4b0e50d9_e8b5_4679_9cd6_7252149ae198.slice.
Feb 9 09:59:44.324305 kubelet[1885]: I0209 09:59:44.324262 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk6t\" (UniqueName: \"kubernetes.io/projected/4b0e50d9-e8b5-4679-9cd6-7252149ae198-kube-api-access-vgk6t\") pod \"test-pod-1\" (UID: \"4b0e50d9-e8b5-4679-9cd6-7252149ae198\") " pod="default/test-pod-1"
Feb 9 09:59:44.324520 kubelet[1885]: I0209 09:59:44.324506 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ef132b9-30fd-417a-bf9d-90b91a084e78\" (UniqueName: \"kubernetes.io/nfs/4b0e50d9-e8b5-4679-9cd6-7252149ae198-pvc-8ef132b9-30fd-417a-bf9d-90b91a084e78\") pod \"test-pod-1\" (UID: \"4b0e50d9-e8b5-4679-9cd6-7252149ae198\") " pod="default/test-pod-1"
Feb 9 09:59:44.444836 kubelet[1885]: E0209 09:59:44.444810 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:44.476337 kernel: FS-Cache: Loaded
Feb 9 09:59:44.516217 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 09:59:44.516360 kernel: RPC: Registered udp transport module.
Feb 9 09:59:44.526817 kernel: RPC: Registered tcp transport module.
Feb 9 09:59:44.526890 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 09:59:44.572320 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 09:59:44.716368 kernel: NFS: Registering the id_resolver key type
Feb 9 09:59:44.716487 kernel: Key type id_resolver registered
Feb 9 09:59:44.720214 kernel: Key type id_legacy registered
Feb 9 09:59:45.116646 nfsidmap[3280]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-ac6bbec117'
Feb 9 09:59:45.217938 nfsidmap[3281]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-ac6bbec117'
Feb 9 09:59:45.409074 env[1360]: time="2024-02-09T09:59:45.408995081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4b0e50d9-e8b5-4679-9cd6-7252149ae198,Namespace:default,Attempt:0,}"
Feb 9 09:59:45.445872 kubelet[1885]: E0209 09:59:45.445835 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:59:45.475693 systemd-networkd[1489]: lxc57c80bf82f9b: Link UP
Feb 9 09:59:45.486479 kernel: eth0: renamed from tmp42c2a
Feb 9 09:59:45.505261 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:59:45.505387 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc57c80bf82f9b: link becomes ready
Feb 9 09:59:45.507360 systemd-networkd[1489]: lxc57c80bf82f9b: Gained carrier
Feb 9 09:59:45.692491 env[1360]: time="2024-02-09T09:59:45.692317986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:59:45.692491 env[1360]: time="2024-02-09T09:59:45.692359947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:59:45.692491 env[1360]: time="2024-02-09T09:59:45.692369787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:59:45.692885 env[1360]: time="2024-02-09T09:59:45.692823314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595 pid=3306 runtime=io.containerd.runc.v2
Feb 9 09:59:45.710279 systemd[1]: Started cri-containerd-42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595.scope.
Feb 9 09:59:45.742539 env[1360]: time="2024-02-09T09:59:45.742499356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4b0e50d9-e8b5-4679-9cd6-7252149ae198,Namespace:default,Attempt:0,} returns sandbox id \"42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595\""
Feb 9 09:59:45.744810 env[1360]: time="2024-02-09T09:59:45.744783551Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:59:46.142341 env[1360]: time="2024-02-09T09:59:46.142287947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:46.154437 env[1360]: time="2024-02-09T09:59:46.154376170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:46.160067 env[1360]: time="2024-02-09T09:59:46.160037856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:46.167872 env[1360]: time="2024-02-09T09:59:46.167843135Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:59:46.168828 env[1360]: time="2024-02-09T09:59:46.168798309Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:59:46.171324 env[1360]: time="2024-02-09T09:59:46.171274547Z" level=info msg="CreateContainer within sandbox \"42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 09:59:46.210216 env[1360]: time="2024-02-09T09:59:46.210144737Z" level=info msg="CreateContainer within sandbox \"42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0fd9b72f544c4e8d6ada9abc67fbf98137a3e01ce0187cf3e3fd835084fb5723\""
Feb 9 09:59:46.210895 env[1360]: time="2024-02-09T09:59:46.210860987Z" level=info msg="StartContainer for \"0fd9b72f544c4e8d6ada9abc67fbf98137a3e01ce0187cf3e3fd835084fb5723\""
Feb 9 09:59:46.224999 systemd[1]: Started cri-containerd-0fd9b72f544c4e8d6ada9abc67fbf98137a3e01ce0187cf3e3fd835084fb5723.scope.
Feb 9 09:59:46.259609 env[1360]: time="2024-02-09T09:59:46.259564407Z" level=info msg="StartContainer for \"0fd9b72f544c4e8d6ada9abc67fbf98137a3e01ce0187cf3e3fd835084fb5723\" returns successfully" Feb 9 09:59:46.446544 kubelet[1885]: E0209 09:59:46.446438 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:46.449151 systemd[1]: run-containerd-runc-k8s.io-42c2a7ffa2ff19f44ec76b553b3f62dc8ca9a782a9f105b998a31a4535659595-runc.EB7THA.mount: Deactivated successfully. Feb 9 09:59:46.680743 kubelet[1885]: I0209 09:59:46.680697 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372014174112e+09 pod.CreationTimestamp="2024-02-09 09:59:24 +0000 UTC" firstStartedPulling="2024-02-09 09:59:45.744135701 +0000 UTC m=+89.892876827" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:46.680036787 +0000 UTC m=+90.828777913" watchObservedRunningTime="2024-02-09 09:59:46.680662957 +0000 UTC m=+90.829404083" Feb 9 09:59:46.776444 systemd-networkd[1489]: lxc57c80bf82f9b: Gained IPv6LL Feb 9 09:59:47.447366 kubelet[1885]: E0209 09:59:47.447332 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:48.448084 kubelet[1885]: E0209 09:59:48.448043 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:49.448653 kubelet[1885]: E0209 09:59:49.448618 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:50.450095 kubelet[1885]: E0209 09:59:50.450037 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:51.450302 kubelet[1885]: E0209 09:59:51.450267 1885 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:52.451009 kubelet[1885]: E0209 09:59:52.450969 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:52.866274 env[1360]: time="2024-02-09T09:59:52.866162842Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:59:52.872212 env[1360]: time="2024-02-09T09:59:52.872177888Z" level=info msg="StopContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" with timeout 1 (s)" Feb 9 09:59:52.872710 env[1360]: time="2024-02-09T09:59:52.872687575Z" level=info msg="Stop container \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" with signal terminated" Feb 9 09:59:52.878348 systemd-networkd[1489]: lxc_health: Link DOWN Feb 9 09:59:52.878353 systemd-networkd[1489]: lxc_health: Lost carrier Feb 9 09:59:52.902719 systemd[1]: cri-containerd-3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b.scope: Deactivated successfully. Feb 9 09:59:52.903031 systemd[1]: cri-containerd-3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b.scope: Consumed 6.254s CPU time. Feb 9 09:59:52.920644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b-rootfs.mount: Deactivated successfully. 
Feb 9 09:59:53.451262 kubelet[1885]: E0209 09:59:53.451214 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:53.452200 env[1360]: time="2024-02-09T09:59:53.452150410Z" level=info msg="shim disconnected" id=3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b Feb 9 09:59:53.452200 env[1360]: time="2024-02-09T09:59:53.452199811Z" level=warning msg="cleaning up after shim disconnected" id=3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b namespace=k8s.io Feb 9 09:59:53.452353 env[1360]: time="2024-02-09T09:59:53.452209531Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.459487 env[1360]: time="2024-02-09T09:59:53.459443273Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3436 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.465097 env[1360]: time="2024-02-09T09:59:53.465053993Z" level=info msg="StopContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" returns successfully" Feb 9 09:59:53.465661 env[1360]: time="2024-02-09T09:59:53.465627001Z" level=info msg="StopPodSandbox for \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\"" Feb 9 09:59:53.465738 env[1360]: time="2024-02-09T09:59:53.465690162Z" level=info msg="Container to stop \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.465738 env[1360]: time="2024-02-09T09:59:53.465705722Z" level=info msg="Container to stop \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.465738 env[1360]: time="2024-02-09T09:59:53.465717122Z" level=info msg="Container to stop \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Feb 9 09:59:53.465738 env[1360]: time="2024-02-09T09:59:53.465728762Z" level=info msg="Container to stop \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.467842 env[1360]: time="2024-02-09T09:59:53.465740403Z" level=info msg="Container to stop \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:53.467221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b-shm.mount: Deactivated successfully. Feb 9 09:59:53.472972 systemd[1]: cri-containerd-a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b.scope: Deactivated successfully. Feb 9 09:59:53.492723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b-rootfs.mount: Deactivated successfully. 
Feb 9 09:59:53.522774 env[1360]: time="2024-02-09T09:59:53.522729011Z" level=info msg="shim disconnected" id=a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b Feb 9 09:59:53.523369 env[1360]: time="2024-02-09T09:59:53.523335499Z" level=warning msg="cleaning up after shim disconnected" id=a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b namespace=k8s.io Feb 9 09:59:53.523369 env[1360]: time="2024-02-09T09:59:53.523360060Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.524609 env[1360]: time="2024-02-09T09:59:53.524409515Z" level=info msg="shim disconnected" id=64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e Feb 9 09:59:53.524609 env[1360]: time="2024-02-09T09:59:53.524454115Z" level=warning msg="cleaning up after shim disconnected" id=64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e namespace=k8s.io Feb 9 09:59:53.524609 env[1360]: time="2024-02-09T09:59:53.524464475Z" level=info msg="cleaning up dead shim" Feb 9 09:59:53.533766 env[1360]: time="2024-02-09T09:59:53.533718167Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3465 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.534171 env[1360]: time="2024-02-09T09:59:53.534004891Z" level=info msg="TearDown network for sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" successfully" Feb 9 09:59:53.534171 env[1360]: time="2024-02-09T09:59:53.534032051Z" level=info msg="StopPodSandbox for \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" returns successfully" Feb 9 09:59:53.541605 env[1360]: time="2024-02-09T09:59:53.541559278Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3466 runtime=io.containerd.runc.v2\n" Feb 9 09:59:53.669322 kubelet[1885]: I0209 09:59:53.669192 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-run\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.669322 kubelet[1885]: I0209 09:59:53.669232 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-lib-modules\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.669322 kubelet[1885]: I0209 09:59:53.669240 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.669322 kubelet[1885]: I0209 09:59:53.669267 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669286 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669250 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-net\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669592 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-hubble-tls\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669615 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx4d7\" (UniqueName: \"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-kube-api-access-jx4d7\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669659 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cni-path\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671342 kubelet[1885]: I0209 09:59:53.669682 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-bpf-maps\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669701 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-xtables-lock\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669722 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-config-path\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669738 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-kernel\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669759 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2326389e-238d-4bdf-a44b-64b797fc254e-clustermesh-secrets\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669776 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-cgroup\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671588 kubelet[1885]: I0209 09:59:53.669793 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-hostproc\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671735 kubelet[1885]: I0209 09:59:53.669809 
1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-etc-cni-netd\") pod \"2326389e-238d-4bdf-a44b-64b797fc254e\" (UID: \"2326389e-238d-4bdf-a44b-64b797fc254e\") " Feb 9 09:59:53.671735 kubelet[1885]: I0209 09:59:53.669837 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-run\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.671735 kubelet[1885]: I0209 09:59:53.669848 1885 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-lib-modules\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.671735 kubelet[1885]: I0209 09:59:53.669859 1885 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-net\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.671735 kubelet[1885]: I0209 09:59:53.669883 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.671735 kubelet[1885]: W0209 09:59:53.670315 1885 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2326389e-238d-4bdf-a44b-64b797fc254e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:53.671988 kubelet[1885]: I0209 09:59:53.671950 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:53.672051 kubelet[1885]: I0209 09:59:53.672021 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cni-path" (OuterVolumeSpecName: "cni-path") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.672092 kubelet[1885]: I0209 09:59:53.672056 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.672092 kubelet[1885]: I0209 09:59:53.672071 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.672440 kubelet[1885]: I0209 09:59:53.672418 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.672539 kubelet[1885]: I0209 09:59:53.672527 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-hostproc" (OuterVolumeSpecName: "hostproc") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.672620 kubelet[1885]: I0209 09:59:53.672605 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:53.674491 systemd[1]: var-lib-kubelet-pods-2326389e\x2d238d\x2d4bdf\x2da44b\x2d64b797fc254e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djx4d7.mount: Deactivated successfully. Feb 9 09:59:53.675731 kubelet[1885]: I0209 09:59:53.675692 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-kube-api-access-jx4d7" (OuterVolumeSpecName: "kube-api-access-jx4d7") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "kube-api-access-jx4d7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:53.677179 kubelet[1885]: I0209 09:59:53.677140 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:53.677867 kubelet[1885]: I0209 09:59:53.677840 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2326389e-238d-4bdf-a44b-64b797fc254e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2326389e-238d-4bdf-a44b-64b797fc254e" (UID: "2326389e-238d-4bdf-a44b-64b797fc254e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:53.685772 kubelet[1885]: I0209 09:59:53.685746 1885 scope.go:115] "RemoveContainer" containerID="3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b" Feb 9 09:59:53.687612 env[1360]: time="2024-02-09T09:59:53.687310745Z" level=info msg="RemoveContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\"" Feb 9 09:59:53.690312 systemd[1]: Removed slice kubepods-burstable-pod2326389e_238d_4bdf_a44b_64b797fc254e.slice. Feb 9 09:59:53.690397 systemd[1]: kubepods-burstable-pod2326389e_238d_4bdf_a44b_64b797fc254e.slice: Consumed 6.343s CPU time. 
Feb 9 09:59:53.700945 env[1360]: time="2024-02-09T09:59:53.700815776Z" level=info msg="RemoveContainer for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" returns successfully" Feb 9 09:59:53.701217 kubelet[1885]: I0209 09:59:53.701189 1885 scope.go:115] "RemoveContainer" containerID="7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281" Feb 9 09:59:53.703337 env[1360]: time="2024-02-09T09:59:53.702703123Z" level=info msg="RemoveContainer for \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\"" Feb 9 09:59:53.709497 env[1360]: time="2024-02-09T09:59:53.709463779Z" level=info msg="RemoveContainer for \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\" returns successfully" Feb 9 09:59:53.709876 kubelet[1885]: I0209 09:59:53.709852 1885 scope.go:115] "RemoveContainer" containerID="e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe" Feb 9 09:59:53.710961 env[1360]: time="2024-02-09T09:59:53.710911880Z" level=info msg="RemoveContainer for \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\"" Feb 9 09:59:53.719998 env[1360]: time="2024-02-09T09:59:53.719964928Z" level=info msg="RemoveContainer for \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\" returns successfully" Feb 9 09:59:53.720246 kubelet[1885]: I0209 09:59:53.720229 1885 scope.go:115] "RemoveContainer" containerID="5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028" Feb 9 09:59:53.721453 env[1360]: time="2024-02-09T09:59:53.721428269Z" level=info msg="RemoveContainer for \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\"" Feb 9 09:59:53.730227 env[1360]: time="2024-02-09T09:59:53.730196593Z" level=info msg="RemoveContainer for \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\" returns successfully" Feb 9 09:59:53.730541 kubelet[1885]: I0209 09:59:53.730526 1885 scope.go:115] "RemoveContainer" 
containerID="64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e" Feb 9 09:59:53.731538 env[1360]: time="2024-02-09T09:59:53.731514052Z" level=info msg="RemoveContainer for \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\"" Feb 9 09:59:53.739843 env[1360]: time="2024-02-09T09:59:53.739809569Z" level=info msg="RemoveContainer for \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\" returns successfully" Feb 9 09:59:53.740157 kubelet[1885]: I0209 09:59:53.740141 1885 scope.go:115] "RemoveContainer" containerID="3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b" Feb 9 09:59:53.740550 env[1360]: time="2024-02-09T09:59:53.740469859Z" level=error msg="ContainerStatus for \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\": not found" Feb 9 09:59:53.740740 kubelet[1885]: E0209 09:59:53.740720 1885 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\": not found" containerID="3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b" Feb 9 09:59:53.740796 kubelet[1885]: I0209 09:59:53.740759 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b} err="failed to get container status \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d4c16a38de257d7cbb3963ebafe311aa40e4453dc9f78f0ad6696a4a009c63b\": not found" Feb 9 09:59:53.740796 kubelet[1885]: I0209 09:59:53.740773 1885 scope.go:115] "RemoveContainer" 
containerID="7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281" Feb 9 09:59:53.740977 env[1360]: time="2024-02-09T09:59:53.740927545Z" level=error msg="ContainerStatus for \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\": not found" Feb 9 09:59:53.741100 kubelet[1885]: E0209 09:59:53.741081 1885 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\": not found" containerID="7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281" Feb 9 09:59:53.741143 kubelet[1885]: I0209 09:59:53.741111 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281} err="failed to get container status \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\": rpc error: code = NotFound desc = an error occurred when try to find container \"7726dca139e3a31221cd1b19d3a5197c2ba1ebd0cdda6c8fce1b7ac5fcea7281\": not found" Feb 9 09:59:53.741143 kubelet[1885]: I0209 09:59:53.741122 1885 scope.go:115] "RemoveContainer" containerID="e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe" Feb 9 09:59:53.741333 env[1360]: time="2024-02-09T09:59:53.741266710Z" level=error msg="ContainerStatus for \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\": not found" Feb 9 09:59:53.741572 kubelet[1885]: E0209 09:59:53.741506 1885 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\": not found" containerID="e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe" Feb 9 09:59:53.741572 kubelet[1885]: I0209 09:59:53.741538 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe} err="failed to get container status \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2879b89d267cac1b1a44ddc2d47fa36ca2b9004e4092ed540ada52d0175ecbe\": not found" Feb 9 09:59:53.741572 kubelet[1885]: I0209 09:59:53.741549 1885 scope.go:115] "RemoveContainer" containerID="5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028" Feb 9 09:59:53.741751 env[1360]: time="2024-02-09T09:59:53.741688316Z" level=error msg="ContainerStatus for \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\": not found" Feb 9 09:59:53.741863 kubelet[1885]: E0209 09:59:53.741841 1885 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\": not found" containerID="5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028" Feb 9 09:59:53.741917 kubelet[1885]: I0209 09:59:53.741869 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028} err="failed to get container status \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"5abd1f59b90bed2b2906acfbf944539f967911c4c117b7559913c8cad98ee028\": not found" Feb 9 09:59:53.741917 kubelet[1885]: I0209 09:59:53.741878 1885 scope.go:115] "RemoveContainer" containerID="64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e" Feb 9 09:59:53.742483 env[1360]: time="2024-02-09T09:59:53.742428806Z" level=error msg="ContainerStatus for \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\": not found" Feb 9 09:59:53.742697 kubelet[1885]: E0209 09:59:53.742671 1885 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\": not found" containerID="64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e" Feb 9 09:59:53.742746 kubelet[1885]: I0209 09:59:53.742741 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e} err="failed to get container status \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\": rpc error: code = NotFound desc = an error occurred when try to find container \"64190ea539529095aa3b5d274587b730792a5af2c88045365306ed207a61c89e\": not found" Feb 9 09:59:53.770189 kubelet[1885]: I0209 09:59:53.770164 1885 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-bpf-maps\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770340 kubelet[1885]: I0209 09:59:53.770330 1885 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cni-path\") on node \"10.200.20.10\" 
DevicePath \"\"" Feb 9 09:59:53.770433 kubelet[1885]: I0209 09:59:53.770424 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-config-path\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770510 kubelet[1885]: I0209 09:59:53.770502 1885 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-host-proc-sys-kernel\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770580 kubelet[1885]: I0209 09:59:53.770573 1885 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2326389e-238d-4bdf-a44b-64b797fc254e-clustermesh-secrets\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770647 kubelet[1885]: I0209 09:59:53.770631 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-cilium-cgroup\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770703 kubelet[1885]: I0209 09:59:53.770696 1885 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-xtables-lock\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770762 kubelet[1885]: I0209 09:59:53.770747 1885 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-hostproc\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770822 kubelet[1885]: I0209 09:59:53.770814 1885 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2326389e-238d-4bdf-a44b-64b797fc254e-etc-cni-netd\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770891 kubelet[1885]: I0209 09:59:53.770883 1885 
reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-hubble-tls\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.770958 kubelet[1885]: I0209 09:59:53.770943 1885 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jx4d7\" (UniqueName: \"kubernetes.io/projected/2326389e-238d-4bdf-a44b-64b797fc254e-kube-api-access-jx4d7\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:53.855409 systemd[1]: var-lib-kubelet-pods-2326389e\x2d238d\x2d4bdf\x2da44b\x2d64b797fc254e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 09:59:53.855504 systemd[1]: var-lib-kubelet-pods-2326389e\x2d238d\x2d4bdf\x2da44b\x2d64b797fc254e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:54.452185 kubelet[1885]: E0209 09:59:54.452147 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:55.453218 kubelet[1885]: E0209 09:59:55.453179 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:55.534730 kubelet[1885]: I0209 09:59:55.534705 1885 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2326389e-238d-4bdf-a44b-64b797fc254e path="/var/lib/kubelet/pods/2326389e-238d-4bdf-a44b-64b797fc254e/volumes" Feb 9 09:59:56.325123 kubelet[1885]: I0209 09:59:56.325092 1885 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:56.325353 kubelet[1885]: E0209 09:59:56.325341 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="clean-cilium-state" Feb 9 09:59:56.325493 kubelet[1885]: E0209 09:59:56.325483 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="cilium-agent" Feb 9 
09:59:56.325575 kubelet[1885]: E0209 09:59:56.325567 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="mount-cgroup" Feb 9 09:59:56.325646 kubelet[1885]: E0209 09:59:56.325639 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="apply-sysctl-overwrites" Feb 9 09:59:56.325703 kubelet[1885]: E0209 09:59:56.325695 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="mount-bpf-fs" Feb 9 09:59:56.325785 kubelet[1885]: I0209 09:59:56.325776 1885 memory_manager.go:346] "RemoveStaleState removing state" podUID="2326389e-238d-4bdf-a44b-64b797fc254e" containerName="cilium-agent" Feb 9 09:59:56.330235 systemd[1]: Created slice kubepods-besteffort-pode1907bc9_ab2f_4789_b953_883b0b402326.slice. Feb 9 09:59:56.352075 kubelet[1885]: I0209 09:59:56.352043 1885 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:56.356691 systemd[1]: Created slice kubepods-burstable-pode8b31093_befd_45e6_b51c_55bf093f2c8f.slice. 
Feb 9 09:59:56.359829 kubelet[1885]: W0209 09:59:56.359804 1885 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:56.359933 kubelet[1885]: E0209 09:59:56.359834 1885 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:56.381896 kubelet[1885]: I0209 09:59:56.381869 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1907bc9-ab2f-4789-b953-883b0b402326-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-d66zh\" (UID: \"e1907bc9-ab2f-4789-b953-883b0b402326\") " pod="kube-system/cilium-operator-f59cbd8c6-d66zh" Feb 9 09:59:56.381998 kubelet[1885]: I0209 09:59:56.381910 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszn5\" (UniqueName: \"kubernetes.io/projected/e1907bc9-ab2f-4789-b953-883b0b402326-kube-api-access-tszn5\") pod \"cilium-operator-f59cbd8c6-d66zh\" (UID: \"e1907bc9-ab2f-4789-b953-883b0b402326\") " pod="kube-system/cilium-operator-f59cbd8c6-d66zh" Feb 9 09:59:56.454513 kubelet[1885]: E0209 09:59:56.454482 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:56.482698 kubelet[1885]: I0209 09:59:56.482665 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq2fx\" (UniqueName: 
\"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-kube-api-access-zq2fx\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482778 kubelet[1885]: I0209 09:59:56.482711 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-net\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482778 kubelet[1885]: I0209 09:59:56.482732 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-hostproc\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482778 kubelet[1885]: I0209 09:59:56.482751 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-lib-modules\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482778 kubelet[1885]: I0209 09:59:56.482773 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-clustermesh-secrets\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482885 kubelet[1885]: I0209 09:59:56.482792 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-ipsec-secrets\") pod \"cilium-j8277\" (UID: 
\"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482885 kubelet[1885]: I0209 09:59:56.482812 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-kernel\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482885 kubelet[1885]: I0209 09:59:56.482831 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cni-path\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482885 kubelet[1885]: I0209 09:59:56.482853 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-xtables-lock\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.482885 kubelet[1885]: I0209 09:59:56.482871 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-hubble-tls\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.483004 kubelet[1885]: I0209 09:59:56.482892 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-bpf-maps\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.483004 kubelet[1885]: I0209 09:59:56.482911 1885 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-cgroup\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.483004 kubelet[1885]: I0209 09:59:56.482941 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-run\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.483004 kubelet[1885]: I0209 09:59:56.482959 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-etc-cni-netd\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.483004 kubelet[1885]: I0209 09:59:56.482978 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-config-path\") pod \"cilium-j8277\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " pod="kube-system/cilium-j8277" Feb 9 09:59:56.634055 env[1360]: time="2024-02-09T09:59:56.633382215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-d66zh,Uid:e1907bc9-ab2f-4789-b953-883b0b402326,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:56.673130 env[1360]: time="2024-02-09T09:59:56.673039283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:56.673281 env[1360]: time="2024-02-09T09:59:56.673148045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:56.673281 env[1360]: time="2024-02-09T09:59:56.673174445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:56.673462 env[1360]: time="2024-02-09T09:59:56.673417048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a355bc3ce653d7fdb76413a43bb0ccc084ba5a52387c7b20ef9658d12346bade pid=3506 runtime=io.containerd.runc.v2 Feb 9 09:59:56.683613 systemd[1]: Started cri-containerd-a355bc3ce653d7fdb76413a43bb0ccc084ba5a52387c7b20ef9658d12346bade.scope. Feb 9 09:59:56.716181 env[1360]: time="2024-02-09T09:59:56.716138959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-d66zh,Uid:e1907bc9-ab2f-4789-b953-883b0b402326,Namespace:kube-system,Attempt:0,} returns sandbox id \"a355bc3ce653d7fdb76413a43bb0ccc084ba5a52387c7b20ef9658d12346bade\"" Feb 9 09:59:56.718049 env[1360]: time="2024-02-09T09:59:56.718020265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:59:57.370786 kubelet[1885]: E0209 09:59:57.370744 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:57.455131 kubelet[1885]: E0209 09:59:57.455092 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:57.498689 kubelet[1885]: E0209 09:59:57.498658 1885 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" Feb 9 09:59:57.565182 env[1360]: time="2024-02-09T09:59:57.565107627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8277,Uid:e8b31093-befd-45e6-b51c-55bf093f2c8f,Namespace:kube-system,Attempt:0,}" Feb 9 09:59:57.603549 env[1360]: time="2024-02-09T09:59:57.603428472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:59:57.603549 env[1360]: time="2024-02-09T09:59:57.603465153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:59:57.603749 env[1360]: time="2024-02-09T09:59:57.603474953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:59:57.604089 env[1360]: time="2024-02-09T09:59:57.603993840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755 pid=3549 runtime=io.containerd.runc.v2 Feb 9 09:59:57.623603 systemd[1]: Started cri-containerd-250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755.scope. 
Feb 9 09:59:57.646164 env[1360]: time="2024-02-09T09:59:57.646119937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8277,Uid:e8b31093-befd-45e6-b51c-55bf093f2c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\"" Feb 9 09:59:57.648715 env[1360]: time="2024-02-09T09:59:57.648675172Z" level=info msg="CreateContainer within sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:59:57.677517 env[1360]: time="2024-02-09T09:59:57.677428886Z" level=info msg="CreateContainer within sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\"" Feb 9 09:59:57.678125 env[1360]: time="2024-02-09T09:59:57.678101495Z" level=info msg="StartContainer for \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\"" Feb 9 09:59:57.692663 systemd[1]: Started cri-containerd-32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2.scope. Feb 9 09:59:57.705449 systemd[1]: cri-containerd-32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2.scope: Deactivated successfully. 
Feb 9 09:59:57.743244 env[1360]: time="2024-02-09T09:59:57.743193828Z" level=info msg="shim disconnected" id=32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2 Feb 9 09:59:57.743505 env[1360]: time="2024-02-09T09:59:57.743485232Z" level=warning msg="cleaning up after shim disconnected" id=32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2 namespace=k8s.io Feb 9 09:59:57.743570 env[1360]: time="2024-02-09T09:59:57.743557193Z" level=info msg="cleaning up dead shim" Feb 9 09:59:57.751567 env[1360]: time="2024-02-09T09:59:57.751524302Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3608 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T09:59:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 09:59:57.752055 env[1360]: time="2024-02-09T09:59:57.751951948Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Feb 9 09:59:57.752206 env[1360]: time="2024-02-09T09:59:57.752138110Z" level=error msg="Failed to pipe stdout of container \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\"" error="reading from a closed fifo" Feb 9 09:59:57.752360 env[1360]: time="2024-02-09T09:59:57.752328153Z" level=error msg="Failed to pipe stderr of container \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\"" error="reading from a closed fifo" Feb 9 09:59:57.756542 env[1360]: time="2024-02-09T09:59:57.756484690Z" level=error msg="StartContainer for \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 09:59:57.756753 kubelet[1885]: E0209 09:59:57.756727 1885 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2" Feb 9 09:59:57.756880 kubelet[1885]: E0209 09:59:57.756859 1885 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 09:59:57.756880 kubelet[1885]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 09:59:57.756880 kubelet[1885]: rm /hostbin/cilium-mount Feb 9 09:59:57.756880 kubelet[1885]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zq2fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-j8277_kube-system(e8b31093-befd-45e6-b51c-55bf093f2c8f): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 09:59:57.757038 kubelet[1885]: E0209 09:59:57.756902 1885 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-j8277" podUID=e8b31093-befd-45e6-b51c-55bf093f2c8f Feb 9 09:59:58.455497 kubelet[1885]: E0209 09:59:58.455458 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:58.507056 systemd[1]: run-containerd-runc-k8s.io-250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755-runc.XABwbn.mount: Deactivated successfully. 
Feb 9 09:59:58.699095 env[1360]: time="2024-02-09T09:59:58.698950491Z" level=info msg="StopPodSandbox for \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\"" Feb 9 09:59:58.699095 env[1360]: time="2024-02-09T09:59:58.699021932Z" level=info msg="Container to stop \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 09:59:58.700602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755-shm.mount: Deactivated successfully. Feb 9 09:59:58.710650 systemd[1]: cri-containerd-250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755.scope: Deactivated successfully. Feb 9 09:59:58.740366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755-rootfs.mount: Deactivated successfully. Feb 9 09:59:58.977984 env[1360]: time="2024-02-09T09:59:58.977745441Z" level=info msg="shim disconnected" id=250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755 Feb 9 09:59:58.977984 env[1360]: time="2024-02-09T09:59:58.977789002Z" level=warning msg="cleaning up after shim disconnected" id=250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755 namespace=k8s.io Feb 9 09:59:58.977984 env[1360]: time="2024-02-09T09:59:58.977797282Z" level=info msg="cleaning up dead shim" Feb 9 09:59:58.985392 env[1360]: time="2024-02-09T09:59:58.985345584Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3640 runtime=io.containerd.runc.v2\n" Feb 9 09:59:58.987524 env[1360]: time="2024-02-09T09:59:58.987491534Z" level=info msg="TearDown network for sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" successfully" Feb 9 09:59:58.987574 env[1360]: time="2024-02-09T09:59:58.987522414Z" level=info msg="StopPodSandbox for 
\"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" returns successfully" Feb 9 09:59:59.058694 env[1360]: time="2024-02-09T09:59:59.058651695Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:59.097922 kubelet[1885]: I0209 09:59:59.097799 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-kernel\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.097922 kubelet[1885]: I0209 09:59:59.097842 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.097922 kubelet[1885]: I0209 09:59:59.097850 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-cgroup\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.097922 kubelet[1885]: I0209 09:59:59.097886 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.097922 kubelet[1885]: I0209 09:59:59.097902 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq2fx\" (UniqueName: \"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-kube-api-access-zq2fx\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.097922 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-bpf-maps\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.097940 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-run\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.097961 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-config-path\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.097979 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-lib-modules\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.097998 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cni-path\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.098214 kubelet[1885]: I0209 09:59:59.098015 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-net\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098032 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-xtables-lock\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098051 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-hubble-tls\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098067 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-hostproc\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098101 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-clustermesh-secrets\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098123 1885 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-ipsec-secrets\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.099963 kubelet[1885]: I0209 09:59:59.098140 1885 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-etc-cni-netd\") pod \"e8b31093-befd-45e6-b51c-55bf093f2c8f\" (UID: \"e8b31093-befd-45e6-b51c-55bf093f2c8f\") " Feb 9 09:59:59.100116 kubelet[1885]: I0209 09:59:59.098170 1885 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-kernel\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.100116 kubelet[1885]: I0209 09:59:59.098180 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-cgroup\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.100116 kubelet[1885]: I0209 09:59:59.098197 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100116 kubelet[1885]: I0209 09:59:59.098338 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100116 kubelet[1885]: I0209 09:59:59.098367 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100239 kubelet[1885]: I0209 09:59:59.098383 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100239 kubelet[1885]: W0209 09:59:59.098528 1885 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e8b31093-befd-45e6-b51c-55bf093f2c8f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 09:59:59.100239 kubelet[1885]: I0209 09:59:59.098818 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100239 kubelet[1885]: I0209 09:59:59.099061 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100239 kubelet[1885]: I0209 09:59:59.099094 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cni-path" (OuterVolumeSpecName: "cni-path") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100372 kubelet[1885]: I0209 09:59:59.099313 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-hostproc" (OuterVolumeSpecName: "hostproc") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:59:59.100935 kubelet[1885]: I0209 09:59:59.100567 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:59:59.105161 systemd[1]: var-lib-kubelet-pods-e8b31093\x2dbefd\x2d45e6\x2db51c\x2d55bf093f2c8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzq2fx.mount: Deactivated successfully. Feb 9 09:59:59.105265 systemd[1]: var-lib-kubelet-pods-e8b31093\x2dbefd\x2d45e6\x2db51c\x2d55bf093f2c8f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 09:59:59.108471 kubelet[1885]: I0209 09:59:59.108318 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:59.109369 kubelet[1885]: I0209 09:59:59.108610 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:59.109369 kubelet[1885]: I0209 09:59:59.108652 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-kube-api-access-zq2fx" (OuterVolumeSpecName: "kube-api-access-zq2fx") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). InnerVolumeSpecName "kube-api-access-zq2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:59:59.109660 env[1360]: time="2024-02-09T09:59:59.109622103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:59.110718 kubelet[1885]: I0209 09:59:59.110692 1885 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e8b31093-befd-45e6-b51c-55bf093f2c8f" (UID: "e8b31093-befd-45e6-b51c-55bf093f2c8f"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:59:59.116824 env[1360]: time="2024-02-09T09:59:59.116778119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:59:59.117407 env[1360]: time="2024-02-09T09:59:59.117373367Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 09:59:59.119267 env[1360]: time="2024-02-09T09:59:59.119232032Z" level=info msg="CreateContainer within sandbox \"a355bc3ce653d7fdb76413a43bb0ccc084ba5a52387c7b20ef9658d12346bade\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:59:59.163478 env[1360]: time="2024-02-09T09:59:59.163404628Z" level=info msg="CreateContainer within sandbox \"a355bc3ce653d7fdb76413a43bb0ccc084ba5a52387c7b20ef9658d12346bade\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9af37b9e40f24bdaf8a4a85ca47ceedc897df7a55d78b1ef7bcb41b508676b3d\"" Feb 9 09:59:59.163975 env[1360]: time="2024-02-09T09:59:59.163928515Z" level=info msg="StartContainer for \"9af37b9e40f24bdaf8a4a85ca47ceedc897df7a55d78b1ef7bcb41b508676b3d\"" Feb 9 09:59:59.178106 systemd[1]: Started cri-containerd-9af37b9e40f24bdaf8a4a85ca47ceedc897df7a55d78b1ef7bcb41b508676b3d.scope. 
Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198416 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-config-path\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198450 1885 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-lib-modules\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198461 1885 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cni-path\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198471 1885 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-host-proc-sys-net\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198481 1885 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-hubble-tls\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198490 1885 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-hostproc\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198499 1885 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-clustermesh-secrets\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199509 kubelet[1885]: I0209 09:59:59.198508 1885 reconciler_common.go:295] "Volume 
detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-ipsec-secrets\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199809 kubelet[1885]: I0209 09:59:59.198541 1885 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-etc-cni-netd\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199809 kubelet[1885]: I0209 09:59:59.198550 1885 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-xtables-lock\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199809 kubelet[1885]: I0209 09:59:59.198559 1885 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-zq2fx\" (UniqueName: \"kubernetes.io/projected/e8b31093-befd-45e6-b51c-55bf093f2c8f-kube-api-access-zq2fx\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199809 kubelet[1885]: I0209 09:59:59.198568 1885 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-bpf-maps\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.199809 kubelet[1885]: I0209 09:59:59.198578 1885 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e8b31093-befd-45e6-b51c-55bf093f2c8f-cilium-run\") on node \"10.200.20.10\" DevicePath \"\"" Feb 9 09:59:59.207285 env[1360]: time="2024-02-09T09:59:59.207224939Z" level=info msg="StartContainer for \"9af37b9e40f24bdaf8a4a85ca47ceedc897df7a55d78b1ef7bcb41b508676b3d\" returns successfully" Feb 9 09:59:59.456168 kubelet[1885]: E0209 09:59:59.456132 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:59:59.507835 systemd[1]: 
var-lib-kubelet-pods-e8b31093\x2dbefd\x2d45e6\x2db51c\x2d55bf093f2c8f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:59.507923 systemd[1]: var-lib-kubelet-pods-e8b31093\x2dbefd\x2d45e6\x2db51c\x2d55bf093f2c8f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:59:59.536669 systemd[1]: Removed slice kubepods-burstable-pode8b31093_befd_45e6_b51c_55bf093f2c8f.slice. Feb 9 09:59:59.701453 kubelet[1885]: I0209 09:59:59.701415 1885 scope.go:115] "RemoveContainer" containerID="32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2" Feb 9 09:59:59.706487 env[1360]: time="2024-02-09T09:59:59.706394752Z" level=info msg="RemoveContainer for \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\"" Feb 9 09:59:59.714516 env[1360]: time="2024-02-09T09:59:59.714479701Z" level=info msg="RemoveContainer for \"32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2\" returns successfully" Feb 9 09:59:59.755757 kubelet[1885]: I0209 09:59:59.755729 1885 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:59:59.755975 kubelet[1885]: E0209 09:59:59.755961 1885 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e8b31093-befd-45e6-b51c-55bf093f2c8f" containerName="mount-cgroup" Feb 9 09:59:59.756084 kubelet[1885]: I0209 09:59:59.756073 1885 memory_manager.go:346] "RemoveStaleState removing state" podUID="e8b31093-befd-45e6-b51c-55bf093f2c8f" containerName="mount-cgroup" Feb 9 09:59:59.756236 kubelet[1885]: I0209 09:59:59.755745 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-d66zh" podStartSLOduration=-9.22337203309906e+09 pod.CreationTimestamp="2024-02-09 09:59:56 +0000 UTC" firstStartedPulling="2024-02-09 09:59:56.717592499 +0000 UTC m=+100.866333625" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:59:59.755047649 +0000 UTC 
m=+103.903788775" watchObservedRunningTime="2024-02-09 09:59:59.755715778 +0000 UTC m=+103.904456904" Feb 9 09:59:59.760733 systemd[1]: Created slice kubepods-burstable-pod2514557d_addd_4812_a603_d323ea4868a8.slice. Feb 9 09:59:59.761809 kubelet[1885]: W0209 09:59:59.761778 1885 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.761809 kubelet[1885]: E0209 09:59:59.761806 1885 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.761913 kubelet[1885]: W0209 09:59:59.761840 1885 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.761913 kubelet[1885]: E0209 09:59:59.761850 1885 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.761913 kubelet[1885]: W0209 09:59:59.761880 1885 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.200.20.10" cannot list 
resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.761913 kubelet[1885]: E0209 09:59:59.761891 1885 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.200.20.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.20.10' and this object Feb 9 09:59:59.801913 kubelet[1885]: I0209 09:59:59.801876 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-lib-modules\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.801913 kubelet[1885]: I0209 09:59:59.801919 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-host-proc-sys-net\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.801939 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2514557d-addd-4812-a603-d323ea4868a8-hubble-tls\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.801961 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-cni-path\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " 
pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.801991 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-xtables-lock\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.802011 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-cilium-ipsec-secrets\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.802045 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-host-proc-sys-kernel\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802081 kubelet[1885]: I0209 09:59:59.802080 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-etc-cni-netd\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802100 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-cilium-run\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802124 1885 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-bpf-maps\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802148 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-hostproc\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802169 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2514557d-addd-4812-a603-d323ea4868a8-cilium-cgroup\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802189 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-clustermesh-secrets\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802226 kubelet[1885]: I0209 09:59:59.802209 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2514557d-addd-4812-a603-d323ea4868a8-cilium-config-path\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 09:59:59.802409 kubelet[1885]: I0209 09:59:59.802227 1885 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnx5q\" (UniqueName: 
\"kubernetes.io/projected/2514557d-addd-4812-a603-d323ea4868a8-kube-api-access-xnx5q\") pod \"cilium-wg6nc\" (UID: \"2514557d-addd-4812-a603-d323ea4868a8\") " pod="kube-system/cilium-wg6nc" Feb 9 10:00:00.457007 kubelet[1885]: E0209 10:00:00.456974 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:00.847905 kubelet[1885]: W0209 10:00:00.847801 1885 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8b31093_befd_45e6_b51c_55bf093f2c8f.slice/cri-containerd-32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2.scope WatchSource:0}: container "32f5d9c3c920081b31f3d19929d8a5becef68f7f0de97108730e8ad01fbb0fd2" in namespace "k8s.io": not found Feb 9 10:00:00.903926 kubelet[1885]: E0209 10:00:00.903898 1885 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.904123 kubelet[1885]: E0209 10:00:00.903901 1885 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.904247 kubelet[1885]: E0209 10:00:00.904236 1885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-cilium-ipsec-secrets podName:2514557d-addd-4812-a603-d323ea4868a8 nodeName:}" failed. No retries permitted until 2024-02-09 10:00:01.404211737 +0000 UTC m=+105.552952863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-cilium-ipsec-secrets") pod "cilium-wg6nc" (UID: "2514557d-addd-4812-a603-d323ea4868a8") : failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.904603 kubelet[1885]: E0209 10:00:00.904591 1885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-clustermesh-secrets podName:2514557d-addd-4812-a603-d323ea4868a8 nodeName:}" failed. No retries permitted until 2024-02-09 10:00:01.404576702 +0000 UTC m=+105.553317828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2514557d-addd-4812-a603-d323ea4868a8-clustermesh-secrets") pod "cilium-wg6nc" (UID: "2514557d-addd-4812-a603-d323ea4868a8") : failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.905013 kubelet[1885]: E0209 10:00:00.904992 1885 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.905013 kubelet[1885]: E0209 10:00:00.905014 1885 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wg6nc: failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:00.905098 kubelet[1885]: E0209 10:00:00.905080 1885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2514557d-addd-4812-a603-d323ea4868a8-hubble-tls podName:2514557d-addd-4812-a603-d323ea4868a8 nodeName:}" failed. No retries permitted until 2024-02-09 10:00:01.405066588 +0000 UTC m=+105.553807714 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2514557d-addd-4812-a603-d323ea4868a8-hubble-tls") pod "cilium-wg6nc" (UID: "2514557d-addd-4812-a603-d323ea4868a8") : failed to sync secret cache: timed out waiting for the condition Feb 9 10:00:01.458060 kubelet[1885]: E0209 10:00:01.458032 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:01.534090 kubelet[1885]: I0209 10:00:01.534055 1885 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e8b31093-befd-45e6-b51c-55bf093f2c8f path="/var/lib/kubelet/pods/e8b31093-befd-45e6-b51c-55bf093f2c8f/volumes" Feb 9 10:00:01.568872 env[1360]: time="2024-02-09T10:00:01.568494173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wg6nc,Uid:2514557d-addd-4812-a603-d323ea4868a8,Namespace:kube-system,Attempt:0,}" Feb 9 10:00:01.599394 env[1360]: time="2024-02-09T10:00:01.599317423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 10:00:01.599538 env[1360]: time="2024-02-09T10:00:01.599411384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 10:00:01.599538 env[1360]: time="2024-02-09T10:00:01.599442264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 10:00:01.599710 env[1360]: time="2024-02-09T10:00:01.599652147Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0 pid=3707 runtime=io.containerd.runc.v2 Feb 9 10:00:01.614734 systemd[1]: Started cri-containerd-d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0.scope. 
Feb 9 10:00:01.640609 env[1360]: time="2024-02-09T10:00:01.640556611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wg6nc,Uid:2514557d-addd-4812-a603-d323ea4868a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\"" Feb 9 10:00:01.643477 env[1360]: time="2024-02-09T10:00:01.643441969Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 10:00:01.681209 env[1360]: time="2024-02-09T10:00:01.681145750Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a\"" Feb 9 10:00:01.681949 env[1360]: time="2024-02-09T10:00:01.681907720Z" level=info msg="StartContainer for \"e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a\"" Feb 9 10:00:01.695706 systemd[1]: Started cri-containerd-e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a.scope. Feb 9 10:00:01.731035 env[1360]: time="2024-02-09T10:00:01.730919531Z" level=info msg="StartContainer for \"e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a\" returns successfully" Feb 9 10:00:01.734944 systemd[1]: cri-containerd-e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a.scope: Deactivated successfully. 
Feb 9 10:00:01.798534 env[1360]: time="2024-02-09T10:00:01.798488869Z" level=info msg="shim disconnected" id=e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a Feb 9 10:00:01.798784 env[1360]: time="2024-02-09T10:00:01.798766393Z" level=warning msg="cleaning up after shim disconnected" id=e74497978f49fd13e9058c23ec6b87ca5fba9315323c903cdb84bcca7fe0996a namespace=k8s.io Feb 9 10:00:01.798986 env[1360]: time="2024-02-09T10:00:01.798959395Z" level=info msg="cleaning up dead shim" Feb 9 10:00:01.806952 env[1360]: time="2024-02-09T10:00:01.806909781Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n" Feb 9 10:00:02.029037 kubelet[1885]: I0209 10:00:02.028928 1885 setters.go:548] "Node became not ready" node="10.200.20.10" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:00:02.028878568 +0000 UTC m=+106.177619694 LastTransitionTime:2024-02-09 10:00:02.028878568 +0000 UTC m=+106.177619694 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 10:00:02.459584 kubelet[1885]: E0209 10:00:02.459551 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:02.499271 kubelet[1885]: E0209 10:00:02.499252 1885 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 10:00:02.716560 env[1360]: time="2024-02-09T10:00:02.716446917Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 10:00:02.744958 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount209491102.mount: Deactivated successfully. Feb 9 10:00:02.750145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890451181.mount: Deactivated successfully. Feb 9 10:00:02.760493 env[1360]: time="2024-02-09T10:00:02.760444697Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35\"" Feb 9 10:00:02.761374 env[1360]: time="2024-02-09T10:00:02.761348549Z" level=info msg="StartContainer for \"797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35\"" Feb 9 10:00:02.775589 systemd[1]: Started cri-containerd-797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35.scope. Feb 9 10:00:02.806828 systemd[1]: cri-containerd-797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35.scope: Deactivated successfully. 
Feb 9 10:00:02.811700 env[1360]: time="2024-02-09T10:00:02.811647813Z" level=info msg="StartContainer for \"797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35\" returns successfully" Feb 9 10:00:02.846711 env[1360]: time="2024-02-09T10:00:02.846654194Z" level=info msg="shim disconnected" id=797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35 Feb 9 10:00:02.846948 env[1360]: time="2024-02-09T10:00:02.846919958Z" level=warning msg="cleaning up after shim disconnected" id=797fe8b693d8910ae04d3585a9b8e708364f72d9103fedf5485c6355802e7d35 namespace=k8s.io Feb 9 10:00:02.847026 env[1360]: time="2024-02-09T10:00:02.847012839Z" level=info msg="cleaning up dead shim" Feb 9 10:00:02.854574 env[1360]: time="2024-02-09T10:00:02.854530058Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3854 runtime=io.containerd.runc.v2\n" Feb 9 10:00:03.460243 kubelet[1885]: E0209 10:00:03.460186 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:03.718506 env[1360]: time="2024-02-09T10:00:03.718272945Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 10:00:03.754663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003981921.mount: Deactivated successfully. Feb 9 10:00:03.760465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810594998.mount: Deactivated successfully. 
Feb 9 10:00:03.775246 env[1360]: time="2024-02-09T10:00:03.775196211Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0\"" Feb 9 10:00:03.776189 env[1360]: time="2024-02-09T10:00:03.776106662Z" level=info msg="StartContainer for \"1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0\"" Feb 9 10:00:03.790755 systemd[1]: Started cri-containerd-1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0.scope. Feb 9 10:00:03.824837 env[1360]: time="2024-02-09T10:00:03.824791460Z" level=info msg="StartContainer for \"1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0\" returns successfully" Feb 9 10:00:03.825413 systemd[1]: cri-containerd-1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0.scope: Deactivated successfully. Feb 9 10:00:03.870071 env[1360]: time="2024-02-09T10:00:03.870018452Z" level=info msg="shim disconnected" id=1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0 Feb 9 10:00:03.870071 env[1360]: time="2024-02-09T10:00:03.870068253Z" level=warning msg="cleaning up after shim disconnected" id=1d41607c746064458503221388bc032d6e827e10fc924bea933bd6ba68b1ada0 namespace=k8s.io Feb 9 10:00:03.870071 env[1360]: time="2024-02-09T10:00:03.870078413Z" level=info msg="cleaning up dead shim" Feb 9 10:00:03.877439 env[1360]: time="2024-02-09T10:00:03.877385589Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3909 runtime=io.containerd.runc.v2\n" Feb 9 10:00:04.461177 kubelet[1885]: E0209 10:00:04.461133 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:04.721381 env[1360]: time="2024-02-09T10:00:04.721135734Z" level=info msg="CreateContainer 
within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 10:00:04.755695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041873382.mount: Deactivated successfully. Feb 9 10:00:04.766908 env[1360]: time="2024-02-09T10:00:04.766848888Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60\"" Feb 9 10:00:04.767419 env[1360]: time="2024-02-09T10:00:04.767393455Z" level=info msg="StartContainer for \"e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60\"" Feb 9 10:00:04.782784 systemd[1]: Started cri-containerd-e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60.scope. Feb 9 10:00:04.807321 systemd[1]: cri-containerd-e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60.scope: Deactivated successfully. 
Feb 9 10:00:04.811170 env[1360]: time="2024-02-09T10:00:04.810266893Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2514557d_addd_4812_a603_d323ea4868a8.slice/cri-containerd-e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60.scope/memory.events\": no such file or directory" Feb 9 10:00:04.816451 env[1360]: time="2024-02-09T10:00:04.816407293Z" level=info msg="StartContainer for \"e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60\" returns successfully" Feb 9 10:00:04.845948 env[1360]: time="2024-02-09T10:00:04.845902916Z" level=info msg="shim disconnected" id=e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60 Feb 9 10:00:04.846255 env[1360]: time="2024-02-09T10:00:04.846236800Z" level=warning msg="cleaning up after shim disconnected" id=e8162516523580df4da2993b9cfa48b5ac50c0d48849f67aba6626cc75e82a60 namespace=k8s.io Feb 9 10:00:04.846397 env[1360]: time="2024-02-09T10:00:04.846380562Z" level=info msg="cleaning up dead shim" Feb 9 10:00:04.854349 env[1360]: time="2024-02-09T10:00:04.854309905Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:00:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3965 runtime=io.containerd.runc.v2\n" Feb 9 10:00:05.461572 kubelet[1885]: E0209 10:00:05.461533 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:05.726464 env[1360]: time="2024-02-09T10:00:05.726355782Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 10:00:05.758172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788664507.mount: Deactivated successfully. 
Feb 9 10:00:05.762652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756967825.mount: Deactivated successfully. Feb 9 10:00:05.773606 env[1360]: time="2024-02-09T10:00:05.773559792Z" level=info msg="CreateContainer within sandbox \"d410977b10a4050db50fd57bb3ee6a8645aa6fcdbb83e1e8f8a8aaa0f755c9d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c\"" Feb 9 10:00:05.774613 env[1360]: time="2024-02-09T10:00:05.774579645Z" level=info msg="StartContainer for \"0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c\"" Feb 9 10:00:05.788504 systemd[1]: Started cri-containerd-0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c.scope. Feb 9 10:00:05.820640 env[1360]: time="2024-02-09T10:00:05.820577679Z" level=info msg="StartContainer for \"0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c\" returns successfully" Feb 9 10:00:06.147365 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 10:00:06.462455 kubelet[1885]: E0209 10:00:06.462336 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:06.742700 kubelet[1885]: I0209 10:00:06.742543 1885 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wg6nc" podStartSLOduration=7.7424989239999995 pod.CreationTimestamp="2024-02-09 09:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:00:06.742081399 +0000 UTC m=+110.890822525" watchObservedRunningTime="2024-02-09 10:00:06.742498924 +0000 UTC m=+110.891240050" Feb 9 10:00:07.462512 kubelet[1885]: E0209 10:00:07.462457 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:08.442958 systemd[1]: 
run-containerd-runc-k8s.io-0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c-runc.aqZW9z.mount: Deactivated successfully. Feb 9 10:00:08.464699 kubelet[1885]: E0209 10:00:08.464643 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:08.673465 systemd-networkd[1489]: lxc_health: Link UP Feb 9 10:00:08.701602 systemd-networkd[1489]: lxc_health: Gained carrier Feb 9 10:00:08.702406 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 10:00:09.464800 kubelet[1885]: E0209 10:00:09.464763 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:10.072598 systemd-networkd[1489]: lxc_health: Gained IPv6LL Feb 9 10:00:10.466266 kubelet[1885]: E0209 10:00:10.466215 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:10.570512 systemd[1]: run-containerd-runc-k8s.io-0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c-runc.GGG2cm.mount: Deactivated successfully. Feb 9 10:00:11.466398 kubelet[1885]: E0209 10:00:11.466363 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:12.467593 kubelet[1885]: E0209 10:00:12.467561 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:12.708687 systemd[1]: run-containerd-runc-k8s.io-0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c-runc.ghCOJh.mount: Deactivated successfully. 
Feb 9 10:00:13.468222 kubelet[1885]: E0209 10:00:13.468182 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:14.468535 kubelet[1885]: E0209 10:00:14.468488 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:14.828254 systemd[1]: run-containerd-runc-k8s.io-0047e567878f92e85f8390b13b2973f59da53df2d48b2895447eaa97a482ba0c-runc.UyEUzn.mount: Deactivated successfully. Feb 9 10:00:15.468834 kubelet[1885]: E0209 10:00:15.468800 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:16.469047 kubelet[1885]: E0209 10:00:16.469015 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:17.370414 kubelet[1885]: E0209 10:00:17.370382 1885 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:17.391272 env[1360]: time="2024-02-09T10:00:17.391230364Z" level=info msg="StopPodSandbox for \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\"" Feb 9 10:00:17.391616 env[1360]: time="2024-02-09T10:00:17.391340005Z" level=info msg="TearDown network for sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" successfully" Feb 9 10:00:17.391616 env[1360]: time="2024-02-09T10:00:17.391374566Z" level=info msg="StopPodSandbox for \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" returns successfully" Feb 9 10:00:17.391901 env[1360]: time="2024-02-09T10:00:17.391875772Z" level=info msg="RemovePodSandbox for \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\"" Feb 9 10:00:17.392016 env[1360]: time="2024-02-09T10:00:17.391984493Z" level=info msg="Forcibly stopping sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\"" 
Feb 9 10:00:17.392143 env[1360]: time="2024-02-09T10:00:17.392125255Z" level=info msg="TearDown network for sandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" successfully" Feb 9 10:00:17.408939 env[1360]: time="2024-02-09T10:00:17.408895657Z" level=info msg="RemovePodSandbox \"250af6338b082ea47649b1b395dc1312ba1313774e6adc1b3a18d2c9be7c3755\" returns successfully" Feb 9 10:00:17.409565 env[1360]: time="2024-02-09T10:00:17.409531544Z" level=info msg="StopPodSandbox for \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\"" Feb 9 10:00:17.409659 env[1360]: time="2024-02-09T10:00:17.409624786Z" level=info msg="TearDown network for sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" successfully" Feb 9 10:00:17.409688 env[1360]: time="2024-02-09T10:00:17.409658306Z" level=info msg="StopPodSandbox for \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" returns successfully" Feb 9 10:00:17.409947 env[1360]: time="2024-02-09T10:00:17.409920309Z" level=info msg="RemovePodSandbox for \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\"" Feb 9 10:00:17.409998 env[1360]: time="2024-02-09T10:00:17.409951469Z" level=info msg="Forcibly stopping sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\"" Feb 9 10:00:17.410040 env[1360]: time="2024-02-09T10:00:17.410009550Z" level=info msg="TearDown network for sandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" successfully" Feb 9 10:00:17.418354 env[1360]: time="2024-02-09T10:00:17.418311890Z" level=info msg="RemovePodSandbox \"a29cad8c847603020917dfff06b1a68766c8564f2e44698ee5d087e96adf365b\" returns successfully" Feb 9 10:00:17.470303 kubelet[1885]: E0209 10:00:17.470255 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:18.470851 kubelet[1885]: E0209 10:00:18.470816 1885 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:19.471825 kubelet[1885]: E0209 10:00:19.471793 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:19.887823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.905245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.922139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.939500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.957145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.974249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.974443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.994920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:19.995126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.015590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.015771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.035792 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.036089 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 10:00:20.055444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.055658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.075563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.075771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.096766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.097012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.116599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.116852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.136437 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.136664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.156323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.156577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.175607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.175840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.194836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 10:00:20.195037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.214048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.214280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.233571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.233798 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.253194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.253388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.272609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.272804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.291908 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.292089 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.312131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.312351 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.331485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.331692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.360398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.360606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.360728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.380122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.380356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.399253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.399461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.418061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.418388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.437308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.437548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.457081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.457371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.472799 kubelet[1885]: E0209 10:00:20.472732 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 10:00:20.476486 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.476708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.496020 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.496244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.515223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.515552 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.535626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.535852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.554540 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.554769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.574015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.574360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.593508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.593689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.613417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
10:00:20.613636 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.633503 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.633729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.652551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.652781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.671634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.671851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.691946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.692154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.711440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.711654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.730675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.730897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.749728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.749966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 10:00:20.769072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.769330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.788535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.788746 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.808074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.808287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.827467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.827678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.837124 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.856455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.856675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.885557 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.885781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.885892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 10:00:20.895388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001
Feb 9 10:00:20.916318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 10:00:21.472884 kubelet[1885]: E0209 10:00:21.472817 1885 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 10:00:22.085258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#58 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001