Feb 9 18:37:24.056949 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:37:24.056968 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:37:24.056976 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 18:37:24.056983 kernel: printk: bootconsole [pl11] enabled
Feb 9 18:37:24.056987 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:37:24.056993 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 18:37:24.056999 kernel: random: crng init done
Feb 9 18:37:24.057004 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:37:24.057010 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 18:37:24.057015 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057021 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057028 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 18:37:24.057033 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057038 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057045 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057051 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057056 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057064 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057069 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 18:37:24.057075 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:37:24.057080 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 18:37:24.057086 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:37:24.057092 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:37:24.057097 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Feb 9 18:37:24.057103 kernel: Zone ranges:
Feb 9 18:37:24.057108 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 18:37:24.057114 kernel: DMA32 empty
Feb 9 18:37:24.057121 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:37:24.057126 kernel: Movable zone start for each node
Feb 9 18:37:24.057132 kernel: Early memory node ranges
Feb 9 18:37:24.057138 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 18:37:24.057143 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 18:37:24.057149 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 18:37:24.057154 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 18:37:24.057160 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 18:37:24.057165 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 18:37:24.057171 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 18:37:24.057176 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 18:37:24.057182 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:37:24.057189 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:37:24.057197 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 18:37:24.057203 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:37:24.057209 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:37:24.057215 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:37:24.057222 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 18:37:24.057228 kernel: psci: SMC Calling Convention v1.4
Feb 9 18:37:24.057234 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 18:37:24.057240 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 18:37:24.057246 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:37:24.057252 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:37:24.057258 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 18:37:24.057264 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:37:24.057270 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:37:24.057276 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:37:24.057282 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:37:24.057288 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:37:24.057295 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:37:24.057301 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:37:24.057307 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 18:37:24.057313 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 18:37:24.057319 kernel: Policy zone: Normal
Feb 9 18:37:24.057326 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:37:24.057333 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:37:24.057339 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:37:24.057345 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:37:24.057351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:37:24.057359 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 18:37:24.057365 kernel: Memory: 3991932K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202228K reserved, 0K cma-reserved)
Feb 9 18:37:24.057371 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 18:37:24.057377 kernel: trace event string verifier disabled
Feb 9 18:37:24.057383 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:37:24.063070 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:37:24.063080 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 18:37:24.063086 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:37:24.063092 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:37:24.063099 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:37:24.063105 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 18:37:24.063115 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:37:24.063121 kernel: GICv3: 960 SPIs implemented
Feb 9 18:37:24.063128 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:37:24.063134 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:37:24.063140 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:37:24.063146 kernel: GICv3: 16 PPIs implemented
Feb 9 18:37:24.063152 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 18:37:24.063158 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 18:37:24.063164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:24.063170 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:37:24.063177 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:37:24.063183 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:37:24.063191 kernel: Console: colour dummy device 80x25
Feb 9 18:37:24.063198 kernel: printk: console [tty1] enabled
Feb 9 18:37:24.063204 kernel: ACPI: Core revision 20210730
Feb 9 18:37:24.063211 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:37:24.063217 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:37:24.063223 kernel: LSM: Security Framework initializing
Feb 9 18:37:24.063229 kernel: SELinux: Initializing.
Feb 9 18:37:24.063236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:37:24.063242 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:37:24.063250 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 18:37:24.063256 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 18:37:24.063263 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:37:24.063269 kernel: Remapping and enabling EFI services.
Feb 9 18:37:24.063275 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:37:24.063281 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:37:24.063288 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 18:37:24.063295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:37:24.063301 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:37:24.063309 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 18:37:24.063315 kernel: SMP: Total of 2 processors activated.
Feb 9 18:37:24.063322 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:37:24.063328 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 18:37:24.063335 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:37:24.063341 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:37:24.063347 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:37:24.063354 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:37:24.063360 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:37:24.063368 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:37:24.063374 kernel: alternatives: patching kernel code
Feb 9 18:37:24.063396 kernel: devtmpfs: initialized
Feb 9 18:37:24.063405 kernel: KASLR enabled
Feb 9 18:37:24.063412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:37:24.063418 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 18:37:24.063425 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:37:24.063431 kernel: SMBIOS 3.1.0 present.
Feb 9 18:37:24.063438 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 18:37:24.063445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:37:24.063453 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:37:24.063460 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:37:24.063466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:37:24.063473 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:37:24.063480 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1
Feb 9 18:37:24.063487 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:37:24.063493 kernel: cpuidle: using governor menu
Feb 9 18:37:24.063501 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:37:24.063508 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:37:24.063515 kernel: ACPI: bus type PCI registered
Feb 9 18:37:24.063521 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:37:24.063528 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:37:24.063535 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:37:24.063541 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:37:24.063548 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:37:24.063554 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:37:24.063562 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:37:24.063569 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:37:24.063575 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:37:24.063582 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:37:24.063588 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:37:24.063595 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:37:24.063602 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:37:24.063608 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:37:24.063615 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:37:24.063623 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:37:24.063629 kernel: ACPI: Interpreter enabled
Feb 9 18:37:24.063636 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:37:24.063642 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:37:24.063649 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:37:24.063656 kernel: printk: bootconsole [pl11] disabled
Feb 9 18:37:24.063662 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 18:37:24.063669 kernel: iommu: Default domain type: Translated
Feb 9 18:37:24.063675 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:37:24.063683 kernel: vgaarb: loaded
Feb 9 18:37:24.063690 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:37:24.063697 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:37:24.063703 kernel: PTP clock support registered
Feb 9 18:37:24.063710 kernel: Registered efivars operations
Feb 9 18:37:24.063716 kernel: No ACPI PMU IRQ for CPU0
Feb 9 18:37:24.063723 kernel: No ACPI PMU IRQ for CPU1
Feb 9 18:37:24.063730 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:37:24.063736 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:37:24.063744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:37:24.063750 kernel: pnp: PnP ACPI init
Feb 9 18:37:24.063757 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 18:37:24.063763 kernel: NET: Registered PF_INET protocol family
Feb 9 18:37:24.063770 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:37:24.063777 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:37:24.063784 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:37:24.063790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:37:24.063797 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:37:24.063805 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:37:24.063811 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:37:24.063818 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:37:24.063824 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:37:24.063831 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:37:24.063838 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 18:37:24.063845 kernel: kvm [1]: HYP mode not available
Feb 9 18:37:24.063851 kernel: Initialise system trusted keyrings
Feb 9 18:37:24.063858 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:37:24.063865 kernel: Key type asymmetric registered
Feb 9 18:37:24.063872 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:37:24.063878 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:37:24.063885 kernel: io scheduler mq-deadline registered
Feb 9 18:37:24.063891 kernel: io scheduler kyber registered
Feb 9 18:37:24.063898 kernel: io scheduler bfq registered
Feb 9 18:37:24.063904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:37:24.063911 kernel: thunder_xcv, ver 1.0
Feb 9 18:37:24.063918 kernel: thunder_bgx, ver 1.0
Feb 9 18:37:24.063925 kernel: nicpf, ver 1.0
Feb 9 18:37:24.063932 kernel: nicvf, ver 1.0
Feb 9 18:37:24.064053 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:37:24.064114 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:37:23 UTC (1707503843)
Feb 9 18:37:24.064123 kernel: efifb: probing for efifb
Feb 9 18:37:24.064130 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 18:37:24.064137 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 18:37:24.064143 kernel: efifb: scrolling: redraw
Feb 9 18:37:24.064152 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 18:37:24.064159 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:37:24.064165 kernel: fb0: EFI VGA frame buffer device
Feb 9 18:37:24.064172 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 18:37:24.064178 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:37:24.064185 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:37:24.064191 kernel: Segment Routing with IPv6
Feb 9 18:37:24.064198 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:37:24.064205 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:37:24.064212 kernel: Key type dns_resolver registered
Feb 9 18:37:24.064219 kernel: registered taskstats version 1
Feb 9 18:37:24.064230 kernel: Loading compiled-in X.509 certificates
Feb 9 18:37:24.064237 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:37:24.064249 kernel: Key type .fscrypt registered
Feb 9 18:37:24.064255 kernel: Key type fscrypt-provisioning registered
Feb 9 18:37:24.064262 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:37:24.064268 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:37:24.064275 kernel: ima: No architecture policies found
Feb 9 18:37:24.064283 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:37:24.064289 kernel: Run /init as init process
Feb 9 18:37:24.064296 kernel: with arguments:
Feb 9 18:37:24.064302 kernel: /init
Feb 9 18:37:24.064309 kernel: with environment:
Feb 9 18:37:24.064315 kernel: HOME=/
Feb 9 18:37:24.064321 kernel: TERM=linux
Feb 9 18:37:24.064328 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:37:24.064336 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:37:24.064347 systemd[1]: Detected virtualization microsoft.
Feb 9 18:37:24.064354 systemd[1]: Detected architecture arm64.
Feb 9 18:37:24.064361 systemd[1]: Running in initrd.
Feb 9 18:37:24.064367 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:37:24.064374 systemd[1]: Hostname set to .
Feb 9 18:37:24.064382 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:37:24.064397 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:37:24.064405 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:37:24.064412 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:37:24.064419 systemd[1]: Reached target paths.target.
Feb 9 18:37:24.064426 systemd[1]: Reached target slices.target.
Feb 9 18:37:24.064433 systemd[1]: Reached target swap.target.
Feb 9 18:37:24.064440 systemd[1]: Reached target timers.target.
Feb 9 18:37:24.064448 systemd[1]: Listening on iscsid.socket.
Feb 9 18:37:24.064455 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:37:24.064464 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:37:24.064471 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:37:24.064478 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:37:24.064485 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:37:24.064492 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:37:24.064499 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:37:24.064506 systemd[1]: Reached target sockets.target.
Feb 9 18:37:24.064513 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:37:24.064520 systemd[1]: Finished network-cleanup.service.
Feb 9 18:37:24.064529 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:37:24.064536 systemd[1]: Starting systemd-journald.service...
Feb 9 18:37:24.064543 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:37:24.064550 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:37:24.064557 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:37:24.064568 systemd-journald[276]: Journal started
Feb 9 18:37:24.064609 systemd-journald[276]: Runtime Journal (/run/log/journal/8926d54972f649f3bf50dfc60ce372c7) is 8.0M, max 78.6M, 70.6M free.
Feb 9 18:37:24.046009 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 18:37:24.087496 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 18:37:24.119175 systemd[1]: Started systemd-journald.service.
Feb 9 18:37:24.119198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:37:24.119209 kernel: Bridge firewalling registered
Feb 9 18:37:24.087512 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:37:24.172813 kernel: audit: type=1130 audit(1707503844.128:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.172836 kernel: SCSI subsystem initialized
Feb 9 18:37:24.172845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:37:24.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.087539 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:37:24.257458 kernel: audit: type=1130 audit(1707503844.156:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.257480 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:37:24.257489 kernel: audit: type=1130 audit(1707503844.201:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.257498 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:37:24.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.089612 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 18:37:24.290426 kernel: audit: type=1130 audit(1707503844.267:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.111549 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 18:37:24.128823 systemd[1]: Started systemd-resolved.service.
Feb 9 18:37:24.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.156698 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:37:24.201710 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:37:24.370158 kernel: audit: type=1130 audit(1707503844.295:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.370181 kernel: audit: type=1130 audit(1707503844.325:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.266993 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 18:37:24.267785 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:37:24.295782 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:37:24.325703 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:37:24.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.335617 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:37:24.363374 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:37:24.472232 kernel: audit: type=1130 audit(1707503844.415:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.472252 kernel: audit: type=1130 audit(1707503844.438:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.384141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:37:24.502436 kernel: audit: type=1130 audit(1707503844.467:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.391088 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:37:24.507474 dracut-cmdline[298]: dracut-dracut-053
Feb 9 18:37:24.507474 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=t
Feb 9 18:37:24.507474 dracut-cmdline[298]: tyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:37:24.415763 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:37:24.439478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:37:24.572012 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:37:24.468512 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:37:24.583406 kernel: iscsi: registered transport (tcp)
Feb 9 18:37:24.603577 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:37:24.603597 kernel: QLogic iSCSI HBA Driver
Feb 9 18:37:24.633207 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:37:24.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:24.639039 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:37:24.693408 kernel: raid6: neonx8 gen() 13825 MB/s
Feb 9 18:37:24.714399 kernel: raid6: neonx8 xor() 10832 MB/s
Feb 9 18:37:24.735396 kernel: raid6: neonx4 gen() 13558 MB/s
Feb 9 18:37:24.757398 kernel: raid6: neonx4 xor() 11306 MB/s
Feb 9 18:37:24.778396 kernel: raid6: neonx2 gen() 12975 MB/s
Feb 9 18:37:24.799396 kernel: raid6: neonx2 xor() 10270 MB/s
Feb 9 18:37:24.821397 kernel: raid6: neonx1 gen() 10491 MB/s
Feb 9 18:37:24.842396 kernel: raid6: neonx1 xor() 8784 MB/s
Feb 9 18:37:24.863396 kernel: raid6: int64x8 gen() 6297 MB/s
Feb 9 18:37:24.888400 kernel: raid6: int64x8 xor() 3547 MB/s
Feb 9 18:37:24.909398 kernel: raid6: int64x4 gen() 7230 MB/s
Feb 9 18:37:24.930396 kernel: raid6: int64x4 xor() 3856 MB/s
Feb 9 18:37:24.952401 kernel: raid6: int64x2 gen() 6155 MB/s
Feb 9 18:37:24.973396 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 9 18:37:24.994395 kernel: raid6: int64x1 gen() 5046 MB/s
Feb 9 18:37:25.021475 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 18:37:25.021494 kernel: raid6: using algorithm neonx8 gen() 13825 MB/s
Feb 9 18:37:25.021510 kernel: raid6: .... xor() 10832 MB/s, rmw enabled
Feb 9 18:37:25.026383 kernel: raid6: using neon recovery algorithm
Feb 9 18:37:25.049437 kernel: xor: measuring software checksum speed
Feb 9 18:37:25.049449 kernel: 8regs : 17308 MB/sec
Feb 9 18:37:25.054477 kernel: 32regs : 20755 MB/sec
Feb 9 18:37:25.060433 kernel: arm64_neon : 27968 MB/sec
Feb 9 18:37:25.060443 kernel: xor: using function: arm64_neon (27968 MB/sec)
Feb 9 18:37:25.122402 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:37:25.130886 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:37:25.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:25.140000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:37:25.140000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:37:25.141191 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:37:25.157379 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 18:37:25.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:25.163675 systemd[1]: Started systemd-udevd.service.
Feb 9 18:37:25.176088 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:37:25.190223 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 18:37:25.219014 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:37:25.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:25.226424 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:37:25.269518 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:37:25.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:25.324453 kernel: hv_vmbus: Vmbus version:5.3
Feb 9 18:37:25.343417 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 18:37:25.343465 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 18:37:25.367410 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Feb 9 18:37:25.367461 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Feb 9 18:37:25.367472 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 18:37:25.383404 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 18:37:25.383449 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 18:37:25.392404 kernel: scsi host0: storvsc_host_t
Feb 9 18:37:25.392577 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 18:37:25.405407 kernel: scsi host1: storvsc_host_t
Feb 9 18:37:25.405587 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 18:37:25.439203 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 18:37:25.439422 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 18:37:25.446693 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 18:37:25.451885 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 18:37:25.452017 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 18:37:25.458413 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 18:37:25.458584 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 18:37:25.458671 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 18:37:25.473407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 18:37:25.479981 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 18:37:25.494472 kernel: hv_netvsc 002248b6-6c39-0022-48b6-6c39002248b6 eth0: VF slot 1 added
Feb 9 18:37:25.502406 kernel: hv_vmbus: registering driver hv_pci
Feb 9 18:37:25.516994 kernel: hv_pci 76c5dac3-2fa6-46ac-a8f0-59c05733b6d2: PCI VMBus probing: Using version 0x10004
Feb 9 18:37:25.532685 kernel: hv_pci 76c5dac3-2fa6-46ac-a8f0-59c05733b6d2: PCI host bridge to bus 2fa6:00
Feb 9 18:37:25.532851 kernel: pci_bus 2fa6:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Feb 9 18:37:25.541006 kernel: pci_bus 2fa6:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 18:37:25.551669 kernel: pci 2fa6:00:02.0: [15b3:1018] type 00 class 0x020000
Feb 9 18:37:25.565706 kernel: pci 2fa6:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 18:37:25.590540 kernel: pci 2fa6:00:02.0: enabling Extended Tags
Feb 9 18:37:25.612424 kernel: pci 2fa6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2fa6:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Feb 9 18:37:25.626776 kernel: pci_bus 2fa6:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 18:37:25.626949 kernel: pci 2fa6:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Feb 9 18:37:25.669410 kernel: mlx5_core 2fa6:00:02.0: firmware version: 16.30.1284
Feb 9 18:37:25.815013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 18:37:25.838413 kernel: mlx5_core 2fa6:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Feb 9 18:37:25.847410 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (541)
Feb 9 18:37:25.861605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:37:25.917424 kernel: hv_netvsc 002248b6-6c39-0022-48b6-6c39002248b6 eth0: VF registering: eth1
Feb 9 18:37:25.923443 kernel: mlx5_core 2fa6:00:02.0 eth1: joined to eth0
Feb 9 18:37:25.936424 kernel: mlx5_core 2fa6:00:02.0 enP12198s1: renamed from eth1
Feb 9 18:37:25.981860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 18:37:25.989164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:37:25.998240 systemd[1]: Starting disk-uuid.service... Feb 9 18:37:26.038307 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:37:27.041416 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 18:37:27.041714 disk-uuid[602]: The operation has completed successfully. Feb 9 18:37:27.091551 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:37:27.096531 systemd[1]: Finished disk-uuid.service. Feb 9 18:37:27.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.115107 systemd[1]: Starting verity-setup.service... Feb 9 18:37:27.155506 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:37:27.320759 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:37:27.331710 systemd[1]: Finished verity-setup.service. Feb 9 18:37:27.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.341986 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:37:27.401422 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:37:27.401961 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:37:27.406762 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:37:27.407566 systemd[1]: Starting ignition-setup.service... Feb 9 18:37:27.416218 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 18:37:27.458209 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:37:27.458259 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:37:27.463444 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:37:27.507766 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:37:27.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.517000 audit: BPF prog-id=9 op=LOAD Feb 9 18:37:27.518817 systemd[1]: Starting systemd-networkd.service... Feb 9 18:37:27.545875 systemd-networkd[871]: lo: Link UP Feb 9 18:37:27.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.545889 systemd-networkd[871]: lo: Gained carrier Feb 9 18:37:27.546289 systemd-networkd[871]: Enumeration completed Feb 9 18:37:27.546959 systemd-networkd[871]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:37:27.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.550366 systemd[1]: Started systemd-networkd.service. Feb 9 18:37:27.556054 systemd[1]: Reached target network.target. Feb 9 18:37:27.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.561952 systemd[1]: Starting iscsiuio.service... 
Feb 9 18:37:27.620361 iscsid[881]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:37:27.620361 iscsid[881]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 18:37:27.620361 iscsid[881]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:37:27.620361 iscsid[881]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:37:27.620361 iscsid[881]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:37:27.620361 iscsid[881]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:37:27.620361 iscsid[881]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:37:27.768262 kernel: mlx5_core 2fa6:00:02.0 enP12198s1: Link up Feb 9 18:37:27.768447 kernel: hv_netvsc 002248b6-6c39-0022-48b6-6c39002248b6 eth0: Data path switched to VF: enP12198s1 Feb 9 18:37:27.768536 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:37:27.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.577006 systemd[1]: Started iscsiuio.service. Feb 9 18:37:27.582720 systemd[1]: Starting iscsid.service... Feb 9 18:37:27.596167 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:37:27.596566 systemd[1]: Started iscsid.service. 
Feb 9 18:37:27.605720 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:37:27.643251 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:37:27.649539 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:37:27.657496 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:37:27.681518 systemd[1]: Reached target remote-fs.target. Feb 9 18:37:27.702612 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:37:27.731821 systemd-networkd[871]: enP12198s1: Link UP Feb 9 18:37:27.731923 systemd-networkd[871]: eth0: Link UP Feb 9 18:37:27.732040 systemd-networkd[871]: eth0: Gained carrier Feb 9 18:37:27.748711 systemd-networkd[871]: enP12198s1: Gained carrier Feb 9 18:37:27.762350 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:37:27.777517 systemd-networkd[871]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:37:27.866574 systemd[1]: Finished ignition-setup.service. Feb 9 18:37:27.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:27.872429 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 18:37:29.786635 systemd-networkd[871]: eth0: Gained IPv6LL Feb 9 18:37:30.162997 ignition[896]: Ignition 2.14.0 Feb 9 18:37:30.166686 ignition[896]: Stage: fetch-offline Feb 9 18:37:30.166771 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:30.166799 ignition[896]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:30.274522 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:30.274702 ignition[896]: parsed url from cmdline: "" Feb 9 18:37:30.274706 ignition[896]: no config URL provided Feb 9 18:37:30.274711 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:37:30.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.289978 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:37:30.337462 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 18:37:30.337485 kernel: audit: type=1130 audit(1707503850.299:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.274719 ignition[896]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:37:30.311717 systemd[1]: Starting ignition-fetch.service... 
Feb 9 18:37:30.274724 ignition[896]: failed to fetch config: resource requires networking Feb 9 18:37:30.274941 ignition[896]: Ignition finished successfully Feb 9 18:37:30.318506 ignition[902]: Ignition 2.14.0 Feb 9 18:37:30.318513 ignition[902]: Stage: fetch Feb 9 18:37:30.318614 ignition[902]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:30.318634 ignition[902]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:30.323288 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:30.323430 ignition[902]: parsed url from cmdline: "" Feb 9 18:37:30.323434 ignition[902]: no config URL provided Feb 9 18:37:30.323439 ignition[902]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:37:30.323447 ignition[902]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:37:30.323482 ignition[902]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 18:37:30.423485 ignition[902]: GET result: OK Feb 9 18:37:30.423553 ignition[902]: config has been read from IMDS userdata Feb 9 18:37:30.423585 ignition[902]: parsing config with SHA512: f641afad8aa5fe0dc38cade149da6bfea2c5f2a3d1239ff5b0ce3eff31f4a7503029a369e9a16d9480410af1a314da302d75b7cc03362a062076c8d61b0da84b Feb 9 18:37:30.447629 unknown[902]: fetched base config from "system" Feb 9 18:37:30.448093 ignition[902]: fetch: fetch complete Feb 9 18:37:30.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:30.447637 unknown[902]: fetched base config from "system" Feb 9 18:37:30.493739 kernel: audit: type=1130 audit(1707503850.459:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.448098 ignition[902]: fetch: fetch passed Feb 9 18:37:30.447642 unknown[902]: fetched user config from "azure" Feb 9 18:37:30.537587 kernel: audit: type=1130 audit(1707503850.507:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.448135 ignition[902]: Ignition finished successfully Feb 9 18:37:30.454120 systemd[1]: Finished ignition-fetch.service. Feb 9 18:37:30.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.495322 ignition[909]: Ignition 2.14.0 Feb 9 18:37:30.483343 systemd[1]: Starting ignition-kargs.service... Feb 9 18:37:30.587758 kernel: audit: type=1130 audit(1707503850.547:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.495329 ignition[909]: Stage: kargs Feb 9 18:37:30.502521 systemd[1]: Finished ignition-kargs.service. Feb 9 18:37:30.495445 ignition[909]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:30.526349 systemd[1]: Starting ignition-disks.service... 
Feb 9 18:37:30.495463 ignition[909]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:30.539267 systemd[1]: Finished ignition-disks.service. Feb 9 18:37:30.498172 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:30.573140 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:37:30.500189 ignition[909]: kargs: kargs passed Feb 9 18:37:30.582656 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:37:30.500238 ignition[909]: Ignition finished successfully Feb 9 18:37:30.593537 systemd[1]: Reached target local-fs.target. Feb 9 18:37:30.532710 ignition[915]: Ignition 2.14.0 Feb 9 18:37:30.602098 systemd[1]: Reached target sysinit.target. Feb 9 18:37:30.532716 ignition[915]: Stage: disks Feb 9 18:37:30.611841 systemd[1]: Reached target basic.target. Feb 9 18:37:30.532904 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:30.629040 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:37:30.532924 ignition[915]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:30.535959 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:30.538354 ignition[915]: disks: disks passed Feb 9 18:37:30.538435 ignition[915]: Ignition finished successfully Feb 9 18:37:30.747172 systemd-fsck[923]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 18:37:30.761003 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:37:30.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.791922 systemd[1]: Mounting sysroot.mount... 
Feb 9 18:37:30.800717 kernel: audit: type=1130 audit(1707503850.765:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.815415 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:37:30.815462 systemd[1]: Mounted sysroot.mount. Feb 9 18:37:30.819746 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:37:30.852214 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:37:30.857066 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 18:37:30.865428 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:37:30.865468 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:37:30.877602 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:37:30.946315 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:37:30.957363 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:37:30.990393 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (933) Feb 9 18:37:30.990729 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:37:30.990740 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:37:30.990749 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:37:31.006708 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:37:31.011506 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:37:31.028758 initrd-setup-root[964]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:37:31.038543 initrd-setup-root[972]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:37:31.060957 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:37:31.403177 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 18:37:31.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.430185 systemd[1]: Starting ignition-mount.service... Feb 9 18:37:31.441724 kernel: audit: type=1130 audit(1707503851.408:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.441983 systemd[1]: Starting sysroot-boot.service... Feb 9 18:37:31.447000 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:37:31.447101 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:37:31.466220 ignition[999]: INFO : Ignition 2.14.0 Feb 9 18:37:31.466220 ignition[999]: INFO : Stage: mount Feb 9 18:37:31.475881 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:31.475881 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:31.475881 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:31.475881 ignition[999]: INFO : mount: mount passed Feb 9 18:37:31.475881 ignition[999]: INFO : Ignition finished successfully Feb 9 18:37:31.543004 kernel: audit: type=1130 audit(1707503851.489:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.475922 systemd[1]: Finished ignition-mount.service. 
Feb 9 18:37:31.548604 systemd[1]: Finished sysroot-boot.service. Feb 9 18:37:31.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.575409 kernel: audit: type=1130 audit(1707503851.553:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:32.051729 coreos-metadata[932]: Feb 09 18:37:32.051 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 18:37:32.062957 coreos-metadata[932]: Feb 09 18:37:32.062 INFO Fetch successful Feb 9 18:37:32.100045 coreos-metadata[932]: Feb 09 18:37:32.100 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 18:37:32.123632 coreos-metadata[932]: Feb 09 18:37:32.123 INFO Fetch successful Feb 9 18:37:32.130442 coreos-metadata[932]: Feb 09 18:37:32.125 INFO wrote hostname ci-3510.3.2-a-aae0fbc2cf to /sysroot/etc/hostname Feb 9 18:37:32.163704 kernel: audit: type=1130 audit(1707503852.135:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:32.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:32.129528 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 18:37:32.136146 systemd[1]: Starting ignition-files.service... Feb 9 18:37:32.178615 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 18:37:32.198421 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1011) Feb 9 18:37:32.217196 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:37:32.217228 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:37:32.217237 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:37:32.226559 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:37:32.243990 ignition[1030]: INFO : Ignition 2.14.0 Feb 9 18:37:32.250415 ignition[1030]: INFO : Stage: files Feb 9 18:37:32.250415 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:37:32.250415 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:37:32.277170 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:37:32.277170 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:37:32.277170 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:37:32.277170 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:37:32.349203 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:37:32.357802 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:37:32.357802 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:37:32.357802 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:37:32.357802 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 18:37:32.349662 unknown[1030]: wrote ssh authorized keys file for user: core Feb 9 18:37:32.820501 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:37:33.115610 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 18:37:33.134243 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:37:33.134243 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:37:33.134243 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:37:33.470957 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:37:33.605550 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 18:37:33.624402 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:37:33.624402 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:37:33.624402 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:37:33.784954 
ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:37:34.075342 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 45b3100984c979ba0f1c0df8f4211474c2d75ebe916e677dff5fc8e3b3697cf7a953da94e356f39684cc860dff6878b772b7514c55651c2f866d9efeef23f970 Feb 9 18:37:34.094438 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:37:34.094438 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:37:34.094438 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:37:34.134575 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:37:34.779797 ignition[1030]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 71857ff499ae135fa478e1827a0ed8865e578a8d2b1e25876e914fd0beba03733801c0654bcd4c0567bafeb16887dafb2dbbe8d1116e6ea28dcd8366c142d348 Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:37:34.806888 ignition[1030]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:37:34.806888 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:37:34.935475 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1035)
Feb 9 18:37:34.856929 systemd[1]: mnt-oem544597052.mount: Deactivated successfully.
Feb 9 18:37:34.967979 kernel: audit: type=1130 audit(1707503854.940:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:34.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem544597052"
Feb 9 18:37:34.968042 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem544597052": device or resource busy
Feb 9 18:37:34.968042 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem544597052", trying btrfs: device or resource busy
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem544597052"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem544597052"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem544597052"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem544597052"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3279163818"
Feb 9 18:37:34.968042 ignition[1030]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3279163818": device or resource busy
Feb 9 18:37:34.968042 ignition[1030]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3279163818", trying btrfs: device or resource busy
Feb 9 18:37:34.968042 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3279163818"
Feb 9 18:37:35.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:34.914074 systemd[1]: mnt-oem3279163818.mount: Deactivated successfully.
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3279163818"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem3279163818"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem3279163818"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:37:35.205636 ignition[1030]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:37:35.505638 kernel: kauditd_printk_skb: 6 callbacks suppressed
Feb 9 18:37:35.505666 kernel: audit: type=1131 audit(1707503855.313:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:34.925849 systemd[1]: Finished ignition-files.service.
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:37:35.514510 ignition[1030]: INFO : files: files passed
Feb 9 18:37:35.514510 ignition[1030]: INFO : Ignition finished successfully
Feb 9 18:37:35.726124 kernel: audit: type=1131 audit(1707503855.519:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.726149 kernel: audit: type=1131 audit(1707503855.567:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.726159 kernel: audit: type=1131 audit(1707503855.603:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.726175 kernel: audit: type=1131 audit(1707503855.636:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.726184 kernel: audit: type=1131 audit(1707503855.668:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.726314 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 18:37:34.941735 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 18:37:35.745834 iscsid[881]: iscsid shutting down.
Feb 9 18:37:34.968161 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 18:37:35.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:34.981866 systemd[1]: Starting ignition-quench.service...
Feb 9 18:37:35.824540 kernel: audit: type=1131 audit(1707503855.760:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.824569 kernel: audit: type=1131 audit(1707503855.799:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.016997 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 18:37:35.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.856405 kernel: audit: type=1131 audit(1707503855.828:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.047312 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 18:37:35.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.047468 systemd[1]: Finished ignition-quench.service.
Feb 9 18:37:35.902586 kernel: audit: type=1131 audit(1707503855.860:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.062499 systemd[1]: Reached target ignition-complete.target.
Feb 9 18:37:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.912355 ignition[1068]: INFO : Ignition 2.14.0
Feb 9 18:37:35.912355 ignition[1068]: INFO : Stage: umount
Feb 9 18:37:35.912355 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:37:35.912355 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:37:35.912355 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:37:35.912355 ignition[1068]: INFO : umount: umount passed
Feb 9 18:37:35.912355 ignition[1068]: INFO : Ignition finished successfully
Feb 9 18:37:35.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.079998 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 18:37:35.110447 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 18:37:35.110546 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 18:37:36.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.126818 systemd[1]: Reached target initrd-fs.target.
Feb 9 18:37:36.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:36.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.144720 systemd[1]: Reached target initrd.target.
Feb 9 18:37:35.163586 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 18:37:35.164543 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 18:37:35.219682 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 18:37:35.236268 systemd[1]: Starting initrd-cleanup.service...
Feb 9 18:37:35.265207 systemd[1]: Stopped target nss-lookup.target.
Feb 9 18:37:35.275901 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 18:37:35.289027 systemd[1]: Stopped target timers.target.
Feb 9 18:37:35.301175 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 18:37:36.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.301289 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 18:37:35.313859 systemd[1]: Stopped target initrd.target.
Feb 9 18:37:36.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:36.116000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 18:37:35.353107 systemd[1]: Stopped target basic.target.
Feb 9 18:37:35.371546 systemd[1]: Stopped target ignition-complete.target.
Feb 9 18:37:36.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.390006 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 18:37:36.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.403663 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 18:37:36.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.417761 systemd[1]: Stopped target remote-fs.target.
Feb 9 18:37:35.437502 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 18:37:35.455344 systemd[1]: Stopped target sysinit.target.
Feb 9 18:37:36.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.467896 systemd[1]: Stopped target local-fs.target.
Feb 9 18:37:35.482054 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 18:37:35.496597 systemd[1]: Stopped target swap.target.
Feb 9 18:37:36.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.510085 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 18:37:36.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.510191 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 18:37:36.254526 kernel: hv_netvsc 002248b6-6c39-0022-48b6-6c39002248b6 eth0: Data path switched from VF: enP12198s1
Feb 9 18:37:36.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.519524 systemd[1]: Stopped target cryptsetup.target.
Feb 9 18:37:35.553734 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 18:37:35.553843 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 18:37:35.567668 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 18:37:36.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.567762 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 18:37:36.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.603472 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 18:37:36.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.603559 systemd[1]: Stopped ignition-files.service.
Feb 9 18:37:36.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:36.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.636354 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 18:37:35.636453 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 18:37:35.698345 systemd[1]: Stopping ignition-mount.service...
Feb 9 18:37:35.727797 systemd[1]: Stopping iscsid.service...
Feb 9 18:37:35.745343 systemd[1]: Stopping sysroot-boot.service...
Feb 9 18:37:35.755655 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 18:37:35.755850 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 18:37:35.781039 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 18:37:35.781229 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 18:37:35.803751 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 18:37:35.804588 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 18:37:36.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.804701 systemd[1]: Stopped iscsid.service.
Feb 9 18:37:35.829839 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 18:37:35.829926 systemd[1]: Stopped ignition-mount.service.
Feb 9 18:37:35.861638 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 18:37:35.861744 systemd[1]: Stopped ignition-disks.service.
Feb 9 18:37:35.891647 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 18:37:35.891741 systemd[1]: Stopped ignition-kargs.service.
Feb 9 18:37:35.896972 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 18:37:35.897055 systemd[1]: Stopped ignition-fetch.service.
Feb 9 18:37:35.907383 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 18:37:35.907549 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 18:37:35.917732 systemd[1]: Stopped target paths.target.
Feb 9 18:37:36.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.926805 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 18:37:35.930416 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 18:37:36.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:35.936608 systemd[1]: Stopped target slices.target.
Feb 9 18:37:35.949234 systemd[1]: Stopped target sockets.target.
Feb 9 18:37:35.968487 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 18:37:35.968613 systemd[1]: Closed iscsid.socket.
Feb 9 18:37:35.981519 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 18:37:35.981652 systemd[1]: Stopped ignition-setup.service.
Feb 9 18:37:35.991866 systemd[1]: Stopping iscsiuio.service...
Feb 9 18:37:36.535128 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Feb 9 18:37:36.008069 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 18:37:36.008175 systemd[1]: Stopped iscsiuio.service.
Feb 9 18:37:36.018945 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 18:37:36.019034 systemd[1]: Finished initrd-cleanup.service.
Feb 9 18:37:36.029698 systemd[1]: Stopped target network.target.
Feb 9 18:37:36.043486 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 18:37:36.043521 systemd[1]: Closed iscsiuio.socket.
Feb 9 18:37:36.055461 systemd[1]: Stopping systemd-networkd.service...
Feb 9 18:37:36.064532 systemd[1]: Stopping systemd-resolved.service...
Feb 9 18:37:36.088091 systemd-networkd[871]: eth0: DHCPv6 lease lost
Feb 9 18:37:36.535000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 18:37:36.089592 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 18:37:36.089698 systemd[1]: Stopped systemd-networkd.service.
Feb 9 18:37:36.099626 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 18:37:36.099741 systemd[1]: Stopped systemd-resolved.service.
Feb 9 18:37:36.116772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 18:37:36.116810 systemd[1]: Closed systemd-networkd.socket.
Feb 9 18:37:36.126538 systemd[1]: Stopping network-cleanup.service...
Feb 9 18:37:36.135019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 18:37:36.135078 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 18:37:36.140550 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:37:36.140594 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:37:36.155240 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 18:37:36.155282 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 18:37:36.160717 systemd[1]: Stopping systemd-udevd.service...
Feb 9 18:37:36.176464 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 18:37:36.179072 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 18:37:36.179219 systemd[1]: Stopped systemd-udevd.service.
Feb 9 18:37:36.188893 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 18:37:36.188935 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 18:37:36.198503 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 18:37:36.198537 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 18:37:36.207688 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 18:37:36.207731 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 18:37:36.217077 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 18:37:36.217114 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 18:37:36.237433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 18:37:36.237483 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 18:37:36.253507 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 18:37:36.270881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 18:37:36.270948 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 18:37:36.289623 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 18:37:36.289671 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 18:37:36.295250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 18:37:36.295289 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 18:37:36.307587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 18:37:36.308109 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 18:37:36.308205 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 18:37:36.378173 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 18:37:36.378288 systemd[1]: Stopped network-cleanup.service.
Feb 9 18:37:36.448683 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 18:37:36.448785 systemd[1]: Stopped sysroot-boot.service.
Feb 9 18:37:36.458545 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 18:37:36.468167 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 18:37:36.468222 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 18:37:36.480264 systemd[1]: Starting initrd-switch-root.service...
Feb 9 18:37:36.499640 systemd[1]: Switching root.
Feb 9 18:37:36.536630 systemd-journald[276]: Journal stopped
Feb 9 18:37:45.876720 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 18:37:45.876755 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 18:37:45.876767 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 18:37:45.876780 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 18:37:45.876788 kernel: SELinux: policy capability open_perms=1
Feb 9 18:37:45.876796 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 18:37:45.876806 kernel: SELinux: policy capability always_check_network=0
Feb 9 18:37:45.876814 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 18:37:45.876823 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 18:37:45.876831 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 18:37:45.876845 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 18:37:45.876855 systemd[1]: Successfully loaded SELinux policy in 240.555ms.
Feb 9 18:37:45.876866 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.400ms.
Feb 9 18:37:45.876876 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:37:45.876888 systemd[1]: Detected virtualization microsoft.
Feb 9 18:37:45.876897 systemd[1]: Detected architecture arm64.
Feb 9 18:37:45.876906 systemd[1]: Detected first boot.
Feb 9 18:37:45.876916 systemd[1]: Hostname set to .
Feb 9 18:37:45.876925 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:37:45.876934 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 18:37:45.876943 systemd[1]: Populated /etc with preset unit settings.
Feb 9 18:37:45.876953 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:37:45.876964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:37:45.876976 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:37:45.876985 kernel: kauditd_printk_skb: 43 callbacks suppressed
Feb 9 18:37:45.876994 kernel: audit: type=1334 audit(1707503864.952:91): prog-id=12 op=LOAD
Feb 9 18:37:45.877003 kernel: audit: type=1334 audit(1707503864.952:92): prog-id=3 op=UNLOAD
Feb 9 18:37:45.877012 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 18:37:45.877021 kernel: audit: type=1334 audit(1707503864.958:93): prog-id=13 op=LOAD
Feb 9 18:37:45.877031 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 18:37:45.877041 kernel: audit: type=1334 audit(1707503864.966:94): prog-id=14 op=LOAD
Feb 9 18:37:45.877051 kernel: audit: type=1334 audit(1707503864.966:95): prog-id=4 op=UNLOAD
Feb 9 18:37:45.877060 kernel: audit: type=1334 audit(1707503864.966:96): prog-id=5 op=UNLOAD
Feb 9 18:37:45.877069 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 18:37:45.877079 kernel: audit: type=1131 audit(1707503864.967:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:45.877089 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 18:37:45.877099 kernel: audit: type=1334 audit(1707503865.002:98): prog-id=12 op=UNLOAD
Feb 9 18:37:45.877108 kernel: audit: type=1130 audit(1707503865.012:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:45.877118 kernel: audit: type=1131 audit(1707503865.012:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:45.877127 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 18:37:45.877139 systemd[1]: Created slice system-getty.slice.
Feb 9 18:37:45.877149 systemd[1]: Created slice system-modprobe.slice.
Feb 9 18:37:45.877158 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 18:37:45.877168 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 18:37:45.877180 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 18:37:45.877189 systemd[1]: Created slice user.slice.
Feb 9 18:37:45.877199 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:37:45.877208 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 18:37:45.877218 systemd[1]: Set up automount boot.automount.
Feb 9 18:37:45.877228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 18:37:45.877238 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 18:37:45.877247 systemd[1]: Stopped target initrd-fs.target.
Feb 9 18:37:45.877258 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 18:37:45.877269 systemd[1]: Reached target integritysetup.target.
Feb 9 18:37:45.877279 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:37:45.877289 systemd[1]: Reached target remote-fs.target.
Feb 9 18:37:45.877298 systemd[1]: Reached target slices.target.
Feb 9 18:37:45.877308 systemd[1]: Reached target swap.target.
Feb 9 18:37:45.877317 systemd[1]: Reached target torcx.target.
Feb 9 18:37:45.877327 systemd[1]: Reached target veritysetup.target.
Feb 9 18:37:45.877338 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 18:37:45.877348 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 18:37:45.877357 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:37:45.877367 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:37:45.877377 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:37:45.877420 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 18:37:45.877432 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 18:37:45.877442 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 18:37:45.877452 systemd[1]: Mounting media.mount...
Feb 9 18:37:45.877461 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 18:37:45.877471 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 18:37:45.877481 systemd[1]: Mounting tmp.mount...
Feb 9 18:37:45.877491 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 18:37:45.877502 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 18:37:45.877514 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:37:45.877524 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:37:45.877534 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:37:45.877544 systemd[1]: Starting modprobe@drm.service... Feb 9 18:37:45.877553 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:37:45.877563 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:37:45.877573 systemd[1]: Starting modprobe@loop.service... Feb 9 18:37:45.877583 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:37:45.877593 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:37:45.877604 kernel: fuse: init (API version 7.34) Feb 9 18:37:45.877613 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:37:45.877623 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:37:45.877633 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:37:45.877643 systemd[1]: Stopped systemd-journald.service. Feb 9 18:37:45.877652 kernel: loop: module loaded Feb 9 18:37:45.877662 systemd[1]: systemd-journald.service: Consumed 3.402s CPU time. Feb 9 18:37:45.877671 systemd[1]: Starting systemd-journald.service... Feb 9 18:37:45.877681 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:37:45.877692 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:37:45.877702 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:37:45.877712 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:37:45.877726 systemd-journald[1207]: Journal started Feb 9 18:37:45.877785 systemd-journald[1207]: Runtime Journal (/run/log/journal/cc443b987966407abcbda3f227a25d31) is 8.0M, max 78.6M, 70.6M free. 
Feb 9 18:37:38.395000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:37:38.992000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:37:38.992000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:37:38.992000 audit: BPF prog-id=10 op=LOAD Feb 9 18:37:38.992000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:37:38.992000 audit: BPF prog-id=11 op=LOAD Feb 9 18:37:38.992000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:37:39.954000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:37:39.954000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022824 a1=4000028ae0 a2=4000026d00 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:39.954000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:37:39.964000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:37:39.964000 audit[1101]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022909 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:39.964000 audit: CWD cwd="/" Feb 9 18:37:39.964000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:39.964000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:39.964000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:37:44.952000 audit: BPF prog-id=12 op=LOAD Feb 9 18:37:44.952000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:37:44.958000 audit: BPF prog-id=13 op=LOAD Feb 9 18:37:44.966000 audit: BPF prog-id=14 op=LOAD Feb 9 18:37:44.966000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:37:44.966000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:37:44.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.002000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:37:45.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:45.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:45.794000 audit: BPF prog-id=15 op=LOAD Feb 9 18:37:45.794000 audit: BPF prog-id=16 op=LOAD Feb 9 18:37:45.794000 audit: BPF prog-id=17 op=LOAD Feb 9 18:37:45.794000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:37:45.794000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:37:45.869000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:37:45.869000 audit[1207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc8338e90 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:45.869000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:37:39.904073 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:37:44.951240 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:37:39.926615 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:37:44.967656 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:37:39.926635 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:37:44.968009 systemd[1]: systemd-journald.service: Consumed 3.402s CPU time. 
Feb 9 18:37:39.926671 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:37:39.926681 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:37:39.926717 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:37:39.926729 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:37:39.926927 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:37:39.926957 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:37:39.926968 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:37:39.942643 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:37:39.942696 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:37:39.942717 /usr/lib/systemd/system-generators/torcx-generator[1101]: 
time="2024-02-09T18:37:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:37:39.942731 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:37:39.942749 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:37:39.942762 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:37:44.138369 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:37:44.138661 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:37:44.138763 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:37:44.138918 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:37:44.138966 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:37:44.139020 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:37:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:37:45.895691 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:37:45.895741 systemd[1]: Stopped verity-setup.service. Feb 9 18:37:45.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.917314 systemd[1]: Started systemd-journald.service. Feb 9 18:37:45.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.918308 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:37:45.925962 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:37:45.932832 systemd[1]: Mounted media.mount. Feb 9 18:37:45.938932 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:37:45.945525 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:37:45.952565 systemd[1]: Mounted tmp.mount. Feb 9 18:37:45.958872 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 18:37:45.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.966538 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:37:45.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.976710 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:37:45.976829 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:37:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.984253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:37:45.984367 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:37:45.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.991137 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:37:45.991257 systemd[1]: Finished modprobe@drm.service. 
Feb 9 18:37:45.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:45.997814 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:37:45.997937 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:37:46.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.005603 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:37:46.005728 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:37:46.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.012634 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:37:46.012758 systemd[1]: Finished modprobe@loop.service. 
Feb 9 18:37:46.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.020190 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:37:46.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.028823 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:37:46.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.037374 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:37:46.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.044608 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:37:46.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.052541 systemd[1]: Reached target network-pre.target. Feb 9 18:37:46.060910 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:37:46.068869 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 18:37:46.074573 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:37:46.076414 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:37:46.083501 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:37:46.089877 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:37:46.091004 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:37:46.097324 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:37:46.098442 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:37:46.105585 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:37:46.112818 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:37:46.121566 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:37:46.129303 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:37:46.138169 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:37:46.158771 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:37:46.166452 systemd-journald[1207]: Time spent on flushing to /var/log/journal/cc443b987966407abcbda3f227a25d31 is 14.126ms for 1100 entries. Feb 9 18:37:46.166452 systemd-journald[1207]: System Journal (/var/log/journal/cc443b987966407abcbda3f227a25d31) is 8.0M, max 2.6G, 2.6G free. Feb 9 18:37:46.243542 systemd-journald[1207]: Received client request to flush runtime journal. Feb 9 18:37:46.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:46.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.177936 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:37:46.187316 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:37:46.244494 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:37:46.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.615944 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:37:46.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:46.624639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:37:46.872089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:37:46.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:47.071451 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:37:47.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:47.080000 audit: BPF prog-id=18 op=LOAD Feb 9 18:37:47.080000 audit: BPF prog-id=19 op=LOAD Feb 9 18:37:47.080000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:37:47.080000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:37:47.081975 systemd[1]: Starting systemd-udevd.service... Feb 9 18:37:47.102490 systemd-udevd[1226]: Using default interface naming scheme 'v252'. Feb 9 18:37:47.249682 systemd[1]: Started systemd-udevd.service. Feb 9 18:37:47.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:47.266000 audit: BPF prog-id=20 op=LOAD Feb 9 18:37:47.268100 systemd[1]: Starting systemd-networkd.service... Feb 9 18:37:47.290347 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Feb 9 18:37:47.346864 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:37:47.349410 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 18:37:47.345000 audit: BPF prog-id=21 op=LOAD Feb 9 18:37:47.345000 audit: BPF prog-id=22 op=LOAD Feb 9 18:37:47.345000 audit: BPF prog-id=23 op=LOAD Feb 9 18:37:47.391450 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 18:37:47.410054 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 18:37:47.410185 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 18:37:47.410218 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 18:37:47.429493 kernel: hv_vmbus: registering driver hv_utils Feb 9 18:37:47.427440 systemd[1]: Started systemd-userdbd.service. 
Feb 9 18:37:47.448795 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 18:37:47.448907 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 18:37:47.461058 kernel: Console: switching to colour dummy device 80x25 Feb 9 18:37:47.461102 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 18:37:47.461131 kernel: hv_vmbus: registering driver hv_balloon Feb 9 18:37:47.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:47.382000 audit[1232]: AVC avc: denied { confidentiality } for pid=1232 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 18:37:47.653842 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 18:37:47.653938 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 18:37:47.653956 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 18:37:47.382000 audit[1232]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaea6739b0 a1=aa2c a2=ffff949724b0 a3=aaaaea3ce010 items=12 ppid=1226 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:47.382000 audit: CWD cwd="/" Feb 9 18:37:47.382000 audit: PATH item=0 name=(null) inode=7363 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=1 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=2 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=3 name=(null) inode=11088 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=4 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=5 name=(null) inode=11089 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=6 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=7 name=(null) inode=11090 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=8 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=9 name=(null) inode=11091 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=10 name=(null) inode=11087 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PATH item=11 name=(null) inode=11092 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:37:47.382000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:37:47.821815 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1239) Feb 9 18:37:47.838532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:37:47.846992 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:37:47.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:47.856007 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:37:47.870172 systemd-networkd[1247]: lo: Link UP Feb 9 18:37:47.870436 systemd-networkd[1247]: lo: Gained carrier Feb 9 18:37:47.870921 systemd-networkd[1247]: Enumeration completed Feb 9 18:37:47.871095 systemd[1]: Started systemd-networkd.service. Feb 9 18:37:47.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:47.879122 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:37:47.897376 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 18:37:47.947819 kernel: mlx5_core 2fa6:00:02.0 enP12198s1: Link up Feb 9 18:37:47.977795 kernel: hv_netvsc 002248b6-6c39-0022-48b6-6c39002248b6 eth0: Data path switched to VF: enP12198s1 Feb 9 18:37:47.978055 systemd-networkd[1247]: enP12198s1: Link UP Feb 9 18:37:47.978146 systemd-networkd[1247]: eth0: Link UP Feb 9 18:37:47.978154 systemd-networkd[1247]: eth0: Gained carrier Feb 9 18:37:47.983017 systemd-networkd[1247]: enP12198s1: Gained carrier Feb 9 18:37:47.995917 systemd-networkd[1247]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:37:48.065588 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:37:48.105805 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:37:48.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:48.113159 systemd[1]: Reached target cryptsetup.target. Feb 9 18:37:48.120754 systemd[1]: Starting lvm2-activation.service... Feb 9 18:37:48.124757 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:37:48.147647 systemd[1]: Finished lvm2-activation.service. Feb 9 18:37:48.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:48.154293 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:37:48.160399 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:37:48.160424 systemd[1]: Reached target local-fs.target. Feb 9 18:37:48.166519 systemd[1]: Reached target machines.target. Feb 9 18:37:48.175592 systemd[1]: Starting ldconfig.service... 
Feb 9 18:37:48.190364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:37:48.190427 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:37:48.191565 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:37:48.198659 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:37:48.207350 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:37:48.213605 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:37:48.213672 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:37:48.214835 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:37:48.246007 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:37:48.559644 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1307 (bootctl) Feb 9 18:37:48.560932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:37:48.764374 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:37:48.767153 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:37:48.767324 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:37:48.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:48.902674 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 9 18:37:48.903281 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:37:48.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:48.916350 systemd-fsck[1315]: fsck.fat 4.2 (2021-01-31) Feb 9 18:37:48.916350 systemd-fsck[1315]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 18:37:48.918842 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:37:48.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:48.930101 systemd[1]: Mounting boot.mount... Feb 9 18:37:48.943571 systemd[1]: Mounted boot.mount. Feb 9 18:37:48.955919 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:37:48.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:49.930436 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:37:49.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:49.939427 systemd[1]: Starting audit-rules.service... Feb 9 18:37:49.947447 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:37:49.954869 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:37:49.962000 audit: BPF prog-id=24 op=LOAD Feb 9 18:37:49.964419 systemd[1]: Starting systemd-resolved.service... 
Feb 9 18:37:49.970000 audit: BPF prog-id=25 op=LOAD Feb 9 18:37:49.972208 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:37:49.979876 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:37:49.994910 systemd-networkd[1247]: eth0: Gained IPv6LL Feb 9 18:37:49.997713 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:37:50.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.016338 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:37:50.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.024277 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:37:50.027000 audit[1327]: SYSTEM_BOOT pid=1327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.030719 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:37:50.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.066568 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:37:50.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:50.095530 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:37:50.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.102769 systemd[1]: Reached target time-set.target. Feb 9 18:37:50.154881 systemd-resolved[1325]: Positive Trust Anchors: Feb 9 18:37:50.154894 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:37:50.154921 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:37:50.182107 systemd-resolved[1325]: Using system hostname 'ci-3510.3.2-a-aae0fbc2cf'. Feb 9 18:37:50.184014 systemd[1]: Started systemd-resolved.service. Feb 9 18:37:50.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.192881 systemd[1]: Reached target network.target. Feb 9 18:37:50.199767 kernel: kauditd_printk_skb: 81 callbacks suppressed Feb 9 18:37:50.199855 kernel: audit: type=1130 audit(1707503870.189:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:50.229013 systemd[1]: Reached target network-online.target. 
Feb 9 18:37:50.235505 systemd[1]: Reached target nss-lookup.target. Feb 9 18:37:50.286000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:37:50.287422 augenrules[1342]: No rules Feb 9 18:37:50.286000 audit[1342]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcce7d3d0 a2=420 a3=0 items=0 ppid=1321 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:50.304495 systemd[1]: Finished audit-rules.service. Feb 9 18:37:50.335677 kernel: audit: type=1305 audit(1707503870.286:166): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:37:50.335789 kernel: audit: type=1300 audit(1707503870.286:166): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcce7d3d0 a2=420 a3=0 items=0 ppid=1321 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:50.335827 kernel: audit: type=1327 audit(1707503870.286:166): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:37:50.286000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:37:50.564061 systemd-timesyncd[1326]: Contacted time server 184.105.182.16:123 (0.flatcar.pool.ntp.org). Feb 9 18:37:50.564454 systemd-timesyncd[1326]: Initial clock synchronization to Fri 2024-02-09 18:37:50.525467 UTC. Feb 9 18:37:54.416691 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:37:54.438954 systemd[1]: Finished ldconfig.service. Feb 9 18:37:54.445242 systemd[1]: Starting systemd-update-done.service... 
Feb 9 18:37:54.475647 systemd[1]: Finished systemd-update-done.service. Feb 9 18:37:54.481551 systemd[1]: Reached target sysinit.target. Feb 9 18:37:54.489492 systemd[1]: Started motdgen.path. Feb 9 18:37:54.493854 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:37:54.501111 systemd[1]: Started logrotate.timer. Feb 9 18:37:54.505460 systemd[1]: Started mdadm.timer. Feb 9 18:37:54.509491 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:37:54.514767 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:37:54.514801 systemd[1]: Reached target paths.target. Feb 9 18:37:54.519247 systemd[1]: Reached target timers.target. Feb 9 18:37:54.524211 systemd[1]: Listening on dbus.socket. Feb 9 18:37:54.529558 systemd[1]: Starting docker.socket... Feb 9 18:37:54.548488 systemd[1]: Listening on sshd.socket. Feb 9 18:37:54.553176 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:37:54.553633 systemd[1]: Listening on docker.socket. Feb 9 18:37:54.558359 systemd[1]: Reached target sockets.target. Feb 9 18:37:54.563205 systemd[1]: Reached target basic.target. Feb 9 18:37:54.567847 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:37:54.567874 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:37:54.568908 systemd[1]: Starting containerd.service... Feb 9 18:37:54.573860 systemd[1]: Starting dbus.service... Feb 9 18:37:54.578359 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:37:54.583987 systemd[1]: Starting extend-filesystems.service... 
Feb 9 18:37:54.591216 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:37:54.592194 systemd[1]: Starting motdgen.service... Feb 9 18:37:54.596863 systemd[1]: Started nvidia.service. Feb 9 18:37:54.602408 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:37:54.608217 systemd[1]: Starting prepare-critools.service... Feb 9 18:37:54.614081 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:37:54.620090 systemd[1]: Starting sshd-keygen.service... Feb 9 18:37:54.626576 systemd[1]: Starting systemd-logind.service... Feb 9 18:37:54.631539 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:37:54.631594 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:37:54.631997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:37:54.632564 systemd[1]: Starting update-engine.service... Feb 9 18:37:54.637926 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:37:54.648504 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:37:54.648666 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 18:37:54.678394 jq[1371]: true Feb 9 18:37:54.678631 jq[1352]: false Feb 9 18:37:54.680143 extend-filesystems[1353]: Found sda Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda1 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda2 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda3 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found usr Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda4 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda6 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda7 Feb 9 18:37:54.687424 extend-filesystems[1353]: Found sda9 Feb 9 18:37:54.687424 extend-filesystems[1353]: Checking size of /dev/sda9 Feb 9 18:37:54.764893 tar[1373]: ./ Feb 9 18:37:54.764893 tar[1373]: ./loopback Feb 9 18:37:54.695859 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:37:54.767985 tar[1374]: crictl Feb 9 18:37:54.696067 systemd[1]: Finished motdgen.service. Feb 9 18:37:54.771150 jq[1401]: true Feb 9 18:37:54.721888 systemd-logind[1366]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 9 18:37:54.722139 systemd-logind[1366]: New seat seat0. Feb 9 18:37:54.733920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:37:54.734071 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:37:54.774624 env[1377]: time="2024-02-09T18:37:54.771995646Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:37:54.827294 env[1377]: time="2024-02-09T18:37:54.827243895Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:37:54.827422 env[1377]: time="2024-02-09T18:37:54.827394316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:37:54.831729 tar[1373]: ./bandwidth Feb 9 18:37:54.832142 env[1377]: time="2024-02-09T18:37:54.830363659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:37:54.832191 env[1377]: time="2024-02-09T18:37:54.832141729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:37:54.832380 env[1377]: time="2024-02-09T18:37:54.832356502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:37:54.832380 env[1377]: time="2024-02-09T18:37:54.832378898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:37:54.832451 env[1377]: time="2024-02-09T18:37:54.832391592Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:37:54.832451 env[1377]: time="2024-02-09T18:37:54.832400694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:37:54.832495 env[1377]: time="2024-02-09T18:37:54.832467761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:37:54.839284 env[1377]: time="2024-02-09T18:37:54.838885537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:37:54.839284 env[1377]: time="2024-02-09T18:37:54.839076558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:37:54.839284 env[1377]: time="2024-02-09T18:37:54.839095560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:37:54.839284 env[1377]: time="2024-02-09T18:37:54.839166339Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:37:54.839284 env[1377]: time="2024-02-09T18:37:54.839179114Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:37:54.841188 extend-filesystems[1353]: Old size kept for /dev/sda9 Feb 9 18:37:54.841188 extend-filesystems[1353]: Found sr0 Feb 9 18:37:54.846716 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:37:54.846915 systemd[1]: Finished extend-filesystems.service. Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869104808Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869151475Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869165248Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869206726Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869221656Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869237305Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869250838Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869612919Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869631961Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869646253Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869658469Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869672880Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869877154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:37:54.884804 env[1377]: time="2024-02-09T18:37:54.869960948Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:37:54.875122 systemd[1]: Started containerd.service. Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870193845Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870219395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870232728Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870284106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870300713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870312530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870325105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870337121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870349297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870360954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870371692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870385146Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870498441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870514209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887618 env[1377]: time="2024-02-09T18:37:54.870525786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:37:54.887936 env[1377]: time="2024-02-09T18:37:54.870536405Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:37:54.887936 env[1377]: time="2024-02-09T18:37:54.870550138Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:37:54.887936 env[1377]: time="2024-02-09T18:37:54.870560757Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:37:54.887936 env[1377]: time="2024-02-09T18:37:54.870579759Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:37:54.887936 env[1377]: time="2024-02-09T18:37:54.870615528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.873971504Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.874035936Z" level=info msg="Connect containerd service" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.874068671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.874654947Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.874969083Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.875007287Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.880502135Z" level=info msg="containerd successfully booted in 0.110748s" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886556433Z" level=info msg="Start subscribing containerd event" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886619707Z" level=info msg="Start recovering state" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886734439Z" level=info msg="Start event monitor" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886755997Z" level=info msg="Start snapshots syncer" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886766895Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:37:54.888042 env[1377]: time="2024-02-09T18:37:54.886876597Z" level=info msg="Start streaming server" Feb 9 18:37:54.910922 bash[1424]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:37:54.894651 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 18:37:54.911036 dbus-daemon[1351]: [system] SELinux support is enabled Feb 9 18:37:54.904145 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:37:54.911956 systemd[1]: Started dbus.service. Feb 9 18:37:54.918749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:37:54.919347 dbus-daemon[1351]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:37:54.920514 systemd[1]: Reached target system-config.target. Feb 9 18:37:54.927969 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:37:54.927991 systemd[1]: Reached target user-config.target. Feb 9 18:37:54.935210 systemd[1]: Started systemd-logind.service. 
Feb 9 18:37:54.966839 tar[1373]: ./ptp Feb 9 18:37:55.056547 tar[1373]: ./vlan Feb 9 18:37:55.128092 tar[1373]: ./host-device Feb 9 18:37:55.190838 tar[1373]: ./tuning Feb 9 18:37:55.247874 tar[1373]: ./vrf Feb 9 18:37:55.253058 update_engine[1368]: I0209 18:37:55.235520 1368 main.cc:92] Flatcar Update Engine starting Feb 9 18:37:55.291281 systemd[1]: Started update-engine.service. Feb 9 18:37:55.297270 update_engine[1368]: I0209 18:37:55.297147 1368 update_check_scheduler.cc:74] Next update check in 4m8s Feb 9 18:37:55.299201 systemd[1]: Started locksmithd.service. Feb 9 18:37:55.310943 tar[1373]: ./sbr Feb 9 18:37:55.330035 systemd[1]: Finished prepare-critools.service. Feb 9 18:37:55.357322 tar[1373]: ./tap Feb 9 18:37:55.391414 tar[1373]: ./dhcp Feb 9 18:37:55.474819 tar[1373]: ./static Feb 9 18:37:55.498984 tar[1373]: ./firewall Feb 9 18:37:55.535266 tar[1373]: ./macvlan Feb 9 18:37:55.568894 tar[1373]: ./dummy Feb 9 18:37:55.601553 tar[1373]: ./bridge Feb 9 18:37:55.637133 tar[1373]: ./ipvlan Feb 9 18:37:55.669561 tar[1373]: ./portmap Feb 9 18:37:55.701053 tar[1373]: ./host-local Feb 9 18:37:55.795884 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:37:56.448156 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:37:57.515828 sshd_keygen[1370]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:37:57.532311 systemd[1]: Finished sshd-keygen.service. Feb 9 18:37:57.538930 systemd[1]: Starting issuegen.service... Feb 9 18:37:57.544531 systemd[1]: Started waagent.service. Feb 9 18:37:57.550139 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:37:57.550299 systemd[1]: Finished issuegen.service. Feb 9 18:37:57.556651 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:37:57.585073 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:37:57.592164 systemd[1]: Started getty@tty1.service. Feb 9 18:37:57.598128 systemd[1]: Started serial-getty@ttyAMA0.service. 
Feb 9 18:37:57.608649 systemd[1]: Reached target getty.target. Feb 9 18:37:57.613524 systemd[1]: Reached target multi-user.target. Feb 9 18:37:57.620022 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:37:57.632214 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:37:57.632385 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:37:57.638535 systemd[1]: Startup finished in 742ms (kernel) + 14.361s (initrd) + 19.450s (userspace) = 34.554s. Feb 9 18:37:58.186982 login[1480]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:37:58.186991 login[1479]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:37:58.207628 systemd[1]: Created slice user-500.slice. Feb 9 18:37:58.208719 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:37:58.211090 systemd-logind[1366]: New session 2 of user core. Feb 9 18:37:58.213989 systemd-logind[1366]: New session 1 of user core. Feb 9 18:37:58.242483 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:37:58.243968 systemd[1]: Starting user@500.service... Feb 9 18:37:58.264281 (systemd)[1483]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:58.452046 systemd[1483]: Queued start job for default target default.target. Feb 9 18:37:58.452533 systemd[1483]: Reached target paths.target. Feb 9 18:37:58.452552 systemd[1483]: Reached target sockets.target. Feb 9 18:37:58.452563 systemd[1483]: Reached target timers.target. Feb 9 18:37:58.452572 systemd[1483]: Reached target basic.target. Feb 9 18:37:58.452670 systemd[1]: Started user@500.service. Feb 9 18:37:58.453546 systemd[1]: Started session-1.scope. Feb 9 18:37:58.454039 systemd[1483]: Reached target default.target. Feb 9 18:37:58.454098 systemd[1483]: Startup finished in 184ms. Feb 9 18:37:58.454113 systemd[1]: Started session-2.scope. 
Feb 9 18:38:02.724802 waagent[1477]: 2024-02-09T18:38:02.724683Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 18:38:02.743432 waagent[1477]: 2024-02-09T18:38:02.743349Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 18:38:02.748961 waagent[1477]: 2024-02-09T18:38:02.748901Z INFO Daemon Daemon Python: 3.9.16 Feb 9 18:38:02.754562 waagent[1477]: 2024-02-09T18:38:02.754467Z INFO Daemon Daemon Run daemon Feb 9 18:38:02.759512 waagent[1477]: 2024-02-09T18:38:02.759450Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 18:38:02.778411 waagent[1477]: 2024-02-09T18:38:02.778298Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 18:38:02.795825 waagent[1477]: 2024-02-09T18:38:02.795670Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:38:02.807320 waagent[1477]: 2024-02-09T18:38:02.807249Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:38:02.813387 waagent[1477]: 2024-02-09T18:38:02.813324Z INFO Daemon Daemon Using waagent for provisioning Feb 9 18:38:02.820578 waagent[1477]: 2024-02-09T18:38:02.820513Z INFO Daemon Daemon Activate resource disk Feb 9 18:38:02.826286 waagent[1477]: 2024-02-09T18:38:02.826227Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 18:38:02.842620 waagent[1477]: 2024-02-09T18:38:02.842555Z INFO Daemon Daemon Found device: None Feb 9 18:38:02.848023 waagent[1477]: 2024-02-09T18:38:02.847963Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 18:38:02.858020 waagent[1477]: 2024-02-09T18:38:02.857962Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 
18:38:02.872117 waagent[1477]: 2024-02-09T18:38:02.872053Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:38:02.878908 waagent[1477]: 2024-02-09T18:38:02.878849Z INFO Daemon Daemon Running default provisioning handler Feb 9 18:38:02.892798 waagent[1477]: 2024-02-09T18:38:02.892653Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 18:38:02.910243 waagent[1477]: 2024-02-09T18:38:02.910115Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:38:02.921918 waagent[1477]: 2024-02-09T18:38:02.921850Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:38:02.928056 waagent[1477]: 2024-02-09T18:38:02.927994Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 18:38:03.030741 waagent[1477]: 2024-02-09T18:38:03.030542Z INFO Daemon Daemon Successfully mounted dvd Feb 9 18:38:03.102652 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 18:38:03.132643 waagent[1477]: 2024-02-09T18:38:03.132490Z INFO Daemon Daemon Detect protocol endpoint Feb 9 18:38:03.138447 waagent[1477]: 2024-02-09T18:38:03.138365Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:38:03.145553 waagent[1477]: 2024-02-09T18:38:03.145475Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 18:38:03.153684 waagent[1477]: 2024-02-09T18:38:03.153608Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 18:38:03.160193 waagent[1477]: 2024-02-09T18:38:03.160127Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 18:38:03.166378 waagent[1477]: 2024-02-09T18:38:03.166316Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 18:38:03.277522 waagent[1477]: 2024-02-09T18:38:03.277450Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 18:38:03.285620 waagent[1477]: 2024-02-09T18:38:03.285539Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 18:38:03.291857 waagent[1477]: 2024-02-09T18:38:03.291795Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 18:38:04.042618 waagent[1477]: 2024-02-09T18:38:04.042455Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 18:38:04.061269 waagent[1477]: 2024-02-09T18:38:04.061188Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 18:38:04.067926 waagent[1477]: 2024-02-09T18:38:04.067854Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 18:38:04.144252 waagent[1477]: 2024-02-09T18:38:04.144111Z INFO Daemon Daemon Found private key matching thumbprint 8C1EABDC40AF5467E6A72A4EF9FB7941EDEE7355 Feb 9 18:38:04.153840 waagent[1477]: 2024-02-09T18:38:04.153753Z INFO Daemon Daemon Certificate with thumbprint 436E65152E1903E1F0E81C5A7C2297A46F03D407 has no matching private key. 
Feb 9 18:38:04.164860 waagent[1477]: 2024-02-09T18:38:04.164757Z INFO Daemon Daemon Fetch goal state completed Feb 9 18:38:04.213944 waagent[1477]: 2024-02-09T18:38:04.213887Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 1d68cf41-d21c-4a93-b61e-5911ca71c530 New eTag: 2332259776827996009] Feb 9 18:38:04.226548 waagent[1477]: 2024-02-09T18:38:04.226467Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:38:04.243625 waagent[1477]: 2024-02-09T18:38:04.243560Z INFO Daemon Daemon Starting provisioning Feb 9 18:38:04.249307 waagent[1477]: 2024-02-09T18:38:04.249228Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 18:38:04.254844 waagent[1477]: 2024-02-09T18:38:04.254763Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-aae0fbc2cf] Feb 9 18:38:04.291271 waagent[1477]: 2024-02-09T18:38:04.291132Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-aae0fbc2cf] Feb 9 18:38:04.300346 waagent[1477]: 2024-02-09T18:38:04.300225Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 18:38:04.308049 waagent[1477]: 2024-02-09T18:38:04.307971Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 18:38:04.324547 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 18:38:04.324706 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 18:38:04.324764 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 18:38:04.325003 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:38:04.328841 systemd-networkd[1247]: eth0: DHCPv6 lease lost Feb 9 18:38:04.330425 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:38:04.330615 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:38:04.332855 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:38:04.359844 systemd-networkd[1528]: enP12198s1: Link UP Feb 9 18:38:04.359852 systemd-networkd[1528]: enP12198s1: Gained carrier Feb 9 18:38:04.360714 systemd-networkd[1528]: eth0: Link UP Feb 9 18:38:04.360725 systemd-networkd[1528]: eth0: Gained carrier Feb 9 18:38:04.361264 systemd-networkd[1528]: lo: Link UP Feb 9 18:38:04.361275 systemd-networkd[1528]: lo: Gained carrier Feb 9 18:38:04.361510 systemd-networkd[1528]: eth0: Gained IPv6LL Feb 9 18:38:04.361710 systemd-networkd[1528]: Enumeration completed Feb 9 18:38:04.362283 systemd[1]: Started systemd-networkd.service. Feb 9 18:38:04.363137 systemd-networkd[1528]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:38:04.364074 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:38:04.367135 waagent[1477]: 2024-02-09T18:38:04.366989Z INFO Daemon Daemon Create user account if not exists Feb 9 18:38:04.374594 waagent[1477]: 2024-02-09T18:38:04.374504Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 18:38:04.381511 waagent[1477]: 2024-02-09T18:38:04.381421Z INFO Daemon Daemon Configure sudoer Feb 9 18:38:04.386853 systemd-networkd[1528]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:38:04.388023 waagent[1477]: 2024-02-09T18:38:04.387936Z INFO Daemon Daemon Configure sshd Feb 9 18:38:04.393444 waagent[1477]: 2024-02-09T18:38:04.393363Z INFO Daemon Daemon Deploy ssh public key. Feb 9 18:38:04.399147 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:38:05.631656 waagent[1477]: 2024-02-09T18:38:05.631565Z INFO Daemon Daemon Provisioning complete Feb 9 18:38:05.652767 waagent[1477]: 2024-02-09T18:38:05.652701Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 18:38:05.660425 waagent[1477]: 2024-02-09T18:38:05.660351Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 9 18:38:05.672630 waagent[1477]: 2024-02-09T18:38:05.672551Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 18:38:05.967103 waagent[1537]: 2024-02-09T18:38:05.966953Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 18:38:05.967790 waagent[1537]: 2024-02-09T18:38:05.967717Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:05.967935 waagent[1537]: 2024-02-09T18:38:05.967886Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:05.980119 waagent[1537]: 2024-02-09T18:38:05.980047Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 18:38:05.980298 waagent[1537]: 2024-02-09T18:38:05.980247Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 18:38:06.047469 waagent[1537]: 2024-02-09T18:38:06.047335Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8C1EABDC40AF5467E6A72A4EF9FB7941EDEE7355 Feb 9 18:38:06.047682 waagent[1537]: 2024-02-09T18:38:06.047627Z INFO ExtHandler ExtHandler Certificate with thumbprint 436E65152E1903E1F0E81C5A7C2297A46F03D407 has no matching private key. 
Feb 9 18:38:06.047931 waagent[1537]: 2024-02-09T18:38:06.047880Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 18:38:06.061085 waagent[1537]: 2024-02-09T18:38:06.061029Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 6f456935-f15b-4f83-920a-8aeb70e2da84 New eTag: 2332259776827996009] Feb 9 18:38:06.061671 waagent[1537]: 2024-02-09T18:38:06.061611Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:38:06.121559 waagent[1537]: 2024-02-09T18:38:06.121409Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:38:06.142661 waagent[1537]: 2024-02-09T18:38:06.142574Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1537 Feb 9 18:38:06.146528 waagent[1537]: 2024-02-09T18:38:06.146466Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:38:06.147970 waagent[1537]: 2024-02-09T18:38:06.147911Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:38:06.230406 waagent[1537]: 2024-02-09T18:38:06.230297Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:38:06.230996 waagent[1537]: 2024-02-09T18:38:06.230937Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:38:06.239026 waagent[1537]: 2024-02-09T18:38:06.238973Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 18:38:06.239679 waagent[1537]: 2024-02-09T18:38:06.239623Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:38:06.240949 waagent[1537]: 2024-02-09T18:38:06.240886Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 18:38:06.242401 waagent[1537]: 2024-02-09T18:38:06.242329Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:38:06.242680 waagent[1537]: 2024-02-09T18:38:06.242612Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:06.243245 waagent[1537]: 2024-02-09T18:38:06.243169Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:06.243898 waagent[1537]: 2024-02-09T18:38:06.243824Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 18:38:06.244448 waagent[1537]: 2024-02-09T18:38:06.244382Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:38:06.245334 waagent[1537]: 2024-02-09T18:38:06.245156Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:38:06.245484 waagent[1537]: 2024-02-09T18:38:06.245416Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 18:38:06.245983 waagent[1537]: 2024-02-09T18:38:06.245904Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:38:06.245983 waagent[1537]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:38:06.245983 waagent[1537]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:38:06.245983 waagent[1537]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:38:06.245983 waagent[1537]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:06.245983 waagent[1537]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:06.245983 waagent[1537]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:06.248229 waagent[1537]: 2024-02-09T18:38:06.248081Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:38:06.248663 waagent[1537]: 2024-02-09T18:38:06.248575Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:06.249287 waagent[1537]: 2024-02-09T18:38:06.249202Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:38:06.249726 waagent[1537]: 2024-02-09T18:38:06.249652Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 18:38:06.250212 waagent[1537]: 2024-02-09T18:38:06.250137Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:06.253232 waagent[1537]: 2024-02-09T18:38:06.253160Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:38:06.256213 waagent[1537]: 2024-02-09T18:38:06.256151Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:38:06.257215 waagent[1537]: 2024-02-09T18:38:06.257154Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:38:06.265157 waagent[1537]: 2024-02-09T18:38:06.265102Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 18:38:06.265867 waagent[1537]: 2024-02-09T18:38:06.265818Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:38:06.266877 waagent[1537]: 2024-02-09T18:38:06.266822Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 18:38:06.292878 waagent[1537]: 2024-02-09T18:38:06.292723Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1528' Feb 9 18:38:06.309541 waagent[1537]: 2024-02-09T18:38:06.309475Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Feb 9 18:38:06.367947 waagent[1537]: 2024-02-09T18:38:06.367803Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:38:06.367947 waagent[1537]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:38:06.367947 waagent[1537]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:38:06.367947 waagent[1537]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:6c:39 brd ff:ff:ff:ff:ff:ff Feb 9 18:38:06.367947 waagent[1537]: 3: enP12198s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:6c:39 brd ff:ff:ff:ff:ff:ff\ altname enP12198p0s2 Feb 9 18:38:06.367947 waagent[1537]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:38:06.367947 waagent[1537]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:38:06.367947 waagent[1537]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:38:06.367947 waagent[1537]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:38:06.367947 waagent[1537]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:38:06.367947 waagent[1537]: 2: eth0 inet6 fe80::222:48ff:feb6:6c39/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:38:06.463174 waagent[1537]: 2024-02-09T18:38:06.463109Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 18:38:06.676159 waagent[1477]: 2024-02-09T18:38:06.676034Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 18:38:06.680138 waagent[1477]: 2024-02-09T18:38:06.680085Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 18:38:07.837797 waagent[1566]: 
2024-02-09T18:38:07.837687Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 18:38:07.838530 waagent[1566]: 2024-02-09T18:38:07.838460Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 18:38:07.838663 waagent[1566]: 2024-02-09T18:38:07.838617Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 18:38:07.846588 waagent[1566]: 2024-02-09T18:38:07.846473Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:38:07.847016 waagent[1566]: 2024-02-09T18:38:07.846960Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:07.847170 waagent[1566]: 2024-02-09T18:38:07.847123Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:07.859828 waagent[1566]: 2024-02-09T18:38:07.859743Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 18:38:07.880958 waagent[1566]: 2024-02-09T18:38:07.880891Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 18:38:07.882055 waagent[1566]: 2024-02-09T18:38:07.881991Z INFO ExtHandler Feb 9 18:38:07.882205 waagent[1566]: 2024-02-09T18:38:07.882157Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0288b23d-8da5-4d10-a9f6-3985ac3163fd eTag: 2332259776827996009 source: Fabric] Feb 9 18:38:07.882973 waagent[1566]: 2024-02-09T18:38:07.882912Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 18:38:07.884188 waagent[1566]: 2024-02-09T18:38:07.884125Z INFO ExtHandler Feb 9 18:38:07.884329 waagent[1566]: 2024-02-09T18:38:07.884280Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 18:38:07.891027 waagent[1566]: 2024-02-09T18:38:07.890976Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 18:38:07.891520 waagent[1566]: 2024-02-09T18:38:07.891469Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:38:07.912317 waagent[1566]: 2024-02-09T18:38:07.912252Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 18:38:07.982486 waagent[1566]: 2024-02-09T18:38:07.982341Z INFO ExtHandler Downloaded certificate {'thumbprint': '8C1EABDC40AF5467E6A72A4EF9FB7941EDEE7355', 'hasPrivateKey': True} Feb 9 18:38:07.983560 waagent[1566]: 2024-02-09T18:38:07.983499Z INFO ExtHandler Downloaded certificate {'thumbprint': '436E65152E1903E1F0E81C5A7C2297A46F03D407', 'hasPrivateKey': False} Feb 9 18:38:07.984593 waagent[1566]: 2024-02-09T18:38:07.984532Z INFO ExtHandler Fetch goal state completed Feb 9 18:38:08.009217 waagent[1566]: 2024-02-09T18:38:08.009146Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1566 Feb 9 18:38:08.012698 waagent[1566]: 2024-02-09T18:38:08.012633Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:38:08.014179 waagent[1566]: 2024-02-09T18:38:08.014120Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:38:08.019160 waagent[1566]: 2024-02-09T18:38:08.019100Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:38:08.019550 waagent[1566]: 2024-02-09T18:38:08.019490Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:38:08.027118 
waagent[1566]: 2024-02-09T18:38:08.027051Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 18:38:08.027616 waagent[1566]: 2024-02-09T18:38:08.027558Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:38:08.033613 waagent[1566]: 2024-02-09T18:38:08.033501Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 18:38:08.037282 waagent[1566]: 2024-02-09T18:38:08.037220Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 18:38:08.038879 waagent[1566]: 2024-02-09T18:38:08.038803Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:38:08.039543 waagent[1566]: 2024-02-09T18:38:08.039481Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:08.039831 waagent[1566]: 2024-02-09T18:38:08.039740Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:08.040525 waagent[1566]: 2024-02-09T18:38:08.040467Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 18:38:08.040939 waagent[1566]: 2024-02-09T18:38:08.040885Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:38:08.040939 waagent[1566]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:38:08.040939 waagent[1566]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:38:08.040939 waagent[1566]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:38:08.040939 waagent[1566]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:08.040939 waagent[1566]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:08.040939 waagent[1566]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:38:08.043698 waagent[1566]: 2024-02-09T18:38:08.043559Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:38:08.044018 waagent[1566]: 2024-02-09T18:38:08.043951Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:38:08.044530 waagent[1566]: 2024-02-09T18:38:08.044468Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:38:08.044681 waagent[1566]: 2024-02-09T18:38:08.044635Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:38:08.044973 waagent[1566]: 2024-02-09T18:38:08.044903Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:38:08.045511 waagent[1566]: 2024-02-09T18:38:08.044757Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:38:08.046791 waagent[1566]: 2024-02-09T18:38:08.046440Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:38:08.047030 waagent[1566]: 2024-02-09T18:38:08.046967Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 18:38:08.047752 waagent[1566]: 2024-02-09T18:38:08.047645Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:38:08.049307 waagent[1566]: 2024-02-09T18:38:08.049101Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 9 18:38:08.050862 waagent[1566]: 2024-02-09T18:38:08.050769Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:38:08.072166 waagent[1566]: 2024-02-09T18:38:08.072086Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 18:38:08.074146 waagent[1566]: 2024-02-09T18:38:08.074004Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 18:38:08.080499 waagent[1566]: 2024-02-09T18:38:08.080425Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:38:08.080499 waagent[1566]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:38:08.080499 waagent[1566]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:38:08.080499 waagent[1566]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:6c:39 brd ff:ff:ff:ff:ff:ff Feb 9 18:38:08.080499 waagent[1566]: 3: enP12198s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b6:6c:39 brd ff:ff:ff:ff:ff:ff\ altname enP12198p0s2 Feb 9 18:38:08.080499 waagent[1566]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:38:08.080499 waagent[1566]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:38:08.080499 waagent[1566]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:38:08.080499 waagent[1566]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:38:08.080499 waagent[1566]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:38:08.080499 waagent[1566]: 2: eth0 inet6 fe80::222:48ff:feb6:6c39/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:38:08.103769 waagent[1566]: 
2024-02-09T18:38:08.103662Z INFO ExtHandler ExtHandler Feb 9 18:38:08.103915 waagent[1566]: 2024-02-09T18:38:08.103850Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2f8307d7-b1fa-4062-8ce6-373fb88b31c7 correlation 52857ad3-5bd9-4fde-99cf-ebea50889ae8 created: 2024-02-09T18:36:47.630875Z] Feb 9 18:38:08.104822 waagent[1566]: 2024-02-09T18:38:08.104734Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 18:38:08.106625 waagent[1566]: 2024-02-09T18:38:08.106567Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 9 18:38:08.126660 waagent[1566]: 2024-02-09T18:38:08.126596Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 18:38:08.144375 waagent[1566]: 2024-02-09T18:38:08.144289Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9615ACF7-5D20-4BE3-A696-008F411421FD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 18:38:08.293471 waagent[1566]: 2024-02-09T18:38:08.293334Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 18:38:08.293471 waagent[1566]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.293471 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.293471 waagent[1566]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.293471 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.293471 waagent[1566]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.293471 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.293471 waagent[1566]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:38:08.293471 waagent[1566]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:38:08.293471 waagent[1566]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:38:08.300725 waagent[1566]: 2024-02-09T18:38:08.300601Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 18:38:08.300725 waagent[1566]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.300725 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.300725 waagent[1566]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.300725 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.300725 waagent[1566]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:38:08.300725 waagent[1566]: pkts bytes target prot opt in out source destination Feb 9 18:38:08.300725 waagent[1566]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:38:08.300725 waagent[1566]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:38:08.300725 waagent[1566]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:38:08.301268 waagent[1566]: 2024-02-09T18:38:08.301211Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 18:38:35.797939 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 9 18:38:40.120131 update_engine[1368]: I0209 18:38:40.120079 1368 update_attempter.cc:509] Updating boot flags... Feb 9 18:38:54.511221 systemd[1]: Created slice system-sshd.slice. Feb 9 18:38:54.512357 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.12.6:35420.service. Feb 9 18:38:55.077738 sshd[1658]: Accepted publickey for core from 10.200.12.6 port 35420 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:55.090923 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:55.095350 systemd[1]: Started session-3.scope. Feb 9 18:38:55.095830 systemd-logind[1366]: New session 3 of user core. Feb 9 18:38:55.446599 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.12.6:35422.service. Feb 9 18:38:55.865710 sshd[1663]: Accepted publickey for core from 10.200.12.6 port 35422 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:55.867032 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:55.870745 systemd-logind[1366]: New session 4 of user core. Feb 9 18:38:55.871203 systemd[1]: Started session-4.scope. Feb 9 18:38:56.169648 sshd[1663]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:56.171967 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:38:56.172530 systemd-logind[1366]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:38:56.172615 systemd[1]: sshd@1-10.200.20.14:22-10.200.12.6:35422.service: Deactivated successfully. Feb 9 18:38:56.173751 systemd-logind[1366]: Removed session 4. Feb 9 18:38:56.238317 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.12.6:35432.service. 
Feb 9 18:38:56.651953 sshd[1669]: Accepted publickey for core from 10.200.12.6 port 35432 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:56.653173 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:56.657272 systemd[1]: Started session-5.scope. Feb 9 18:38:56.657562 systemd-logind[1366]: New session 5 of user core. Feb 9 18:38:56.949723 sshd[1669]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:56.952297 systemd[1]: sshd@2-10.200.20.14:22-10.200.12.6:35432.service: Deactivated successfully. Feb 9 18:38:56.952956 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:38:56.953472 systemd-logind[1366]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:38:56.954278 systemd-logind[1366]: Removed session 5. Feb 9 18:38:57.017938 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.12.6:52874.service. Feb 9 18:38:57.437304 sshd[1675]: Accepted publickey for core from 10.200.12.6 port 52874 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:57.438457 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:57.442095 systemd-logind[1366]: New session 6 of user core. Feb 9 18:38:57.442463 systemd[1]: Started session-6.scope. Feb 9 18:38:57.741713 sshd[1675]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:57.744266 systemd[1]: sshd@3-10.200.20.14:22-10.200.12.6:52874.service: Deactivated successfully. Feb 9 18:38:57.744937 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:38:57.745442 systemd-logind[1366]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:38:57.746220 systemd-logind[1366]: Removed session 6. Feb 9 18:38:57.816246 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.12.6:52880.service. 
Feb 9 18:38:58.264875 sshd[1681]: Accepted publickey for core from 10.200.12.6 port 52880 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:38:58.266047 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:58.269649 systemd-logind[1366]: New session 7 of user core.
Feb 9 18:38:58.270091 systemd[1]: Started session-7.scope.
Feb 9 18:38:58.736695 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 18:38:58.736915 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 18:38:59.620949 systemd[1]: Reloading.
Feb 9 18:38:59.688457 /usr/lib/systemd/system-generators/torcx-generator[1714]: time="2024-02-09T18:38:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:38:59.688487 /usr/lib/systemd/system-generators/torcx-generator[1714]: time="2024-02-09T18:38:59Z" level=info msg="torcx already run"
Feb 9 18:38:59.749556 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:38:59.749728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:38:59.766542 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:38:59.850892 systemd[1]: Started kubelet.service.
Feb 9 18:38:59.860234 systemd[1]: Starting coreos-metadata.service...
Feb 9 18:38:59.897306 coreos-metadata[1779]: Feb 09 18:38:59.897 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 18:38:59.900013 coreos-metadata[1779]: Feb 09 18:38:59.899 INFO Fetch successful
Feb 9 18:38:59.900273 coreos-metadata[1779]: Feb 09 18:38:59.900 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 9 18:38:59.902805 coreos-metadata[1779]: Feb 09 18:38:59.902 INFO Fetch successful
Feb 9 18:38:59.903144 coreos-metadata[1779]: Feb 09 18:38:59.903 INFO Fetching http://168.63.129.16/machine/76da384f-fa62-4b64-a16a-631afa938cf4/7bff0ccb%2Db9c3%2D4811%2D822d%2D7587a760d67a.%5Fci%2D3510.3.2%2Da%2Daae0fbc2cf?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 9 18:38:59.904892 coreos-metadata[1779]: Feb 09 18:38:59.904 INFO Fetch successful
Feb 9 18:38:59.907957 kubelet[1772]: E0209 18:38:59.907900 1772 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 9 18:38:59.909426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 18:38:59.909556 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 18:38:59.939385 coreos-metadata[1779]: Feb 09 18:38:59.939 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 9 18:38:59.952041 coreos-metadata[1779]: Feb 09 18:38:59.952 INFO Fetch successful
Feb 9 18:38:59.960005 systemd[1]: Finished coreos-metadata.service.
Feb 9 18:39:02.455407 systemd[1]: Stopped kubelet.service.
Feb 9 18:39:02.472720 systemd[1]: Reloading.
Feb 9 18:39:02.544559 /usr/lib/systemd/system-generators/torcx-generator[1838]: time="2024-02-09T18:39:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:39:02.549843 /usr/lib/systemd/system-generators/torcx-generator[1838]: time="2024-02-09T18:39:02Z" level=info msg="torcx already run"
Feb 9 18:39:02.601186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:39:02.601352 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:39:02.618173 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:39:02.699662 systemd[1]: Started kubelet.service.
Feb 9 18:39:02.740218 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 18:39:02.740218 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 18:39:02.740218 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 18:39:02.740218 kubelet[1897]: I0209 18:39:02.739833 1897 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 18:39:03.719677 kubelet[1897]: I0209 18:39:03.719253 1897 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 9 18:39:03.719677 kubelet[1897]: I0209 18:39:03.719679 1897 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 18:39:03.720044 kubelet[1897]: I0209 18:39:03.720025 1897 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 9 18:39:03.726827 kubelet[1897]: W0209 18:39:03.726807 1897 machine.go:65] Cannot read vendor id correctly, set empty.
Feb 9 18:39:03.727300 kubelet[1897]: I0209 18:39:03.727269 1897 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 18:39:03.727553 kubelet[1897]: I0209 18:39:03.727539 1897 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 18:39:03.727831 kubelet[1897]: I0209 18:39:03.727819 1897 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 18:39:03.727968 kubelet[1897]: I0209 18:39:03.727955 1897 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 18:39:03.728106 kubelet[1897]: I0209 18:39:03.728094 1897 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 18:39:03.728170 kubelet[1897]: I0209 18:39:03.728161 1897 container_manager_linux.go:302] "Creating device plugin manager"
Feb 9 18:39:03.728322 kubelet[1897]: I0209 18:39:03.728310 1897 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 18:39:03.731359 kubelet[1897]: I0209 18:39:03.731334 1897 kubelet.go:405] "Attempting to sync node with API server"
Feb 9 18:39:03.731359 kubelet[1897]: I0209 18:39:03.731363 1897 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 18:39:03.731459 kubelet[1897]: I0209 18:39:03.731386 1897 kubelet.go:309] "Adding apiserver pod source"
Feb 9 18:39:03.731459 kubelet[1897]: I0209 18:39:03.731399 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 18:39:03.731846 kubelet[1897]: E0209 18:39:03.731826 1897 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:03.731876 kubelet[1897]: E0209 18:39:03.731862 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:03.732328 kubelet[1897]: I0209 18:39:03.732310 1897 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 18:39:03.732543 kubelet[1897]: W0209 18:39:03.732524 1897 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 18:39:03.732965 kubelet[1897]: I0209 18:39:03.732944 1897 server.go:1168] "Started kubelet"
Feb 9 18:39:03.733160 kubelet[1897]: I0209 18:39:03.733137 1897 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 18:39:03.733683 kubelet[1897]: I0209 18:39:03.733655 1897 server.go:461] "Adding debug handlers to kubelet server"
Feb 9 18:39:03.742059 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 18:39:03.742130 kubelet[1897]: I0209 18:39:03.734557 1897 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 18:39:03.742130 kubelet[1897]: E0209 18:39:03.736781 1897 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 18:39:03.742130 kubelet[1897]: E0209 18:39:03.736800 1897 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 18:39:03.742541 kubelet[1897]: I0209 18:39:03.742526 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 18:39:03.746311 kubelet[1897]: E0209 18:39:03.746276 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found"
Feb 9 18:39:03.746311 kubelet[1897]: I0209 18:39:03.746313 1897 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 9 18:39:03.746416 kubelet[1897]: I0209 18:39:03.746397 1897 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 9 18:39:03.751582 kubelet[1897]: E0209 18:39:03.751554 1897 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 9 18:39:03.765763 kubelet[1897]: E0209 18:39:03.765672 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca42c78773", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 732922227, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 732922227, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.766124 kubelet[1897]: W0209 18:39:03.766107 1897 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 18:39:03.766236 kubelet[1897]: E0209 18:39:03.766225 1897 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 18:39:03.766346 kubelet[1897]: W0209 18:39:03.766333 1897 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 18:39:03.766429 kubelet[1897]: E0209 18:39:03.766419 1897 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 18:39:03.766540 kubelet[1897]: W0209 18:39:03.766529 1897 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 18:39:03.766626 kubelet[1897]: E0209 18:39:03.766616 1897 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 18:39:03.768480 kubelet[1897]: E0209 18:39:03.768416 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca4302962a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 736792618, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 736792618, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.770853 kubelet[1897]: E0209 18:39:03.770797 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdd993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770036627, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770036627, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.771097 kubelet[1897]: I0209 18:39:03.770855 1897 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 18:39:03.771097 kubelet[1897]: I0209 18:39:03.771094 1897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 18:39:03.771187 kubelet[1897]: I0209 18:39:03.771112 1897 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 18:39:03.771979 kubelet[1897]: E0209 18:39:03.771921 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdec3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770041404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770041404, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.772830 kubelet[1897]: E0209 18:39:03.772757 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdf5e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770043873, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770043873, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.776108 kubelet[1897]: I0209 18:39:03.776084 1897 policy_none.go:49] "None policy: Start"
Feb 9 18:39:03.776575 kubelet[1897]: I0209 18:39:03.776553 1897 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 18:39:03.776633 kubelet[1897]: I0209 18:39:03.776581 1897 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 18:39:03.783884 systemd[1]: Created slice kubepods.slice.
Feb 9 18:39:03.785527 kubelet[1897]: I0209 18:39:03.785270 1897 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 18:39:03.787974 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 18:39:03.791431 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 18:39:03.799088 kubelet[1897]: I0209 18:39:03.799067 1897 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 18:39:03.799388 kubelet[1897]: I0209 18:39:03.799374 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 18:39:03.800985 kubelet[1897]: I0209 18:39:03.800821 1897 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 18:39:03.800985 kubelet[1897]: I0209 18:39:03.800848 1897 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 9 18:39:03.800985 kubelet[1897]: I0209 18:39:03.800863 1897 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 9 18:39:03.800985 kubelet[1897]: E0209 18:39:03.800905 1897 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 18:39:03.804054 kubelet[1897]: E0209 18:39:03.803986 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca46de7184", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 801532804, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 801532804, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.805039 kubelet[1897]: E0209 18:39:03.805007 1897 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.14\" not found"
Feb 9 18:39:03.805509 kubelet[1897]: W0209 18:39:03.805492 1897 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 18:39:03.805637 kubelet[1897]: E0209 18:39:03.805625 1897 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 18:39:03.847276 kubelet[1897]: I0209 18:39:03.847246 1897 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.14"
Feb 9 18:39:03.848100 kubelet[1897]: E0209 18:39:03.848080 1897 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.14"
Feb 9 18:39:03.848625 kubelet[1897]: E0209 18:39:03.848558 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdd993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770036627, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 847214152, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdd993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:03.849235 kubelet[1897]: E0209 18:39:03.849177 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdec3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770041404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 847218968, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdec3c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:03.849986 kubelet[1897]: E0209 18:39:03.849913 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdf5e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770043873, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 847221636, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdf5e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:03.953439 kubelet[1897]: E0209 18:39:03.953403 1897 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 9 18:39:04.050833 kubelet[1897]: I0209 18:39:04.049655 1897 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.14"
Feb 9 18:39:04.051535 kubelet[1897]: E0209 18:39:04.051464 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdd993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770036627, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 49610934, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdd993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 18:39:04.051979 kubelet[1897]: E0209 18:39:04.051960 1897 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.14" Feb 9 18:39:04.052433 kubelet[1897]: E0209 18:39:04.052378 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdec3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770041404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 49625270, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdec3c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.053177 kubelet[1897]: E0209 18:39:04.053125 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdf5e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770043873, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 49628296, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdf5e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.355397 kubelet[1897]: E0209 18:39:04.355374 1897 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.20.14\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 18:39:04.453539 kubelet[1897]: I0209 18:39:04.453509 1897 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.14" Feb 9 18:39:04.454713 kubelet[1897]: E0209 18:39:04.454695 1897 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.14" Feb 9 18:39:04.454843 kubelet[1897]: E0209 18:39:04.454749 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdd993", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.14 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770036627, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 453464789, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdd993" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:39:04.455525 kubelet[1897]: E0209 18:39:04.455463 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdec3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.14 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770041404, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 453482509, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdec3c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:39:04.456174 kubelet[1897]: E0209 18:39:04.456118 1897 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.14.17b245ca44fdf5e1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.14", UID:"10.200.20.14", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.14 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.14"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 39, 3, 770043873, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 39, 4, 453485137, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.14.17b245ca44fdf5e1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
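The burst of `system:anonymous` event rejections above (typical while the kubelet is still waiting on its TLS bootstrap credentials; the denials stop once "Certificate rotation detected" appears further down) is easier to audit in aggregate. A minimal sketch that tallies rejection reasons, assuming only the `Reason:"…"` field format visible in the records above — the sample lines are abbreviated copies of real records from this journal:

```python
import re
from collections import Counter

# Abbreviated copies of the rejected-event records logged above.
lines = [
    'kubelet[1897]: E0209 18:39:03.849913 1897 event.go:280] Server rejected event ... Reason:"NodeHasSufficientPID" ...',
    'kubelet[1897]: E0209 18:39:04.051464 1897 event.go:280] Server rejected event ... Reason:"NodeHasSufficientMemory" ...',
    'kubelet[1897]: E0209 18:39:04.052378 1897 event.go:280] Server rejected event ... Reason:"NodeHasNoDiskPressure" ...',
    'kubelet[1897]: E0209 18:39:04.053125 1897 event.go:280] Server rejected event ... Reason:"NodeHasSufficientPID" ...',
]

# Pull the Reason field out of each rejected-event record and count occurrences.
reason = re.compile(r'Reason:"([^"]+)"')
counts = Counter(m.group(1) for line in lines for m in reason.finditer(line))
for name, n in counts.most_common():
    print(name, n)
```

Run over the full journal, this kind of tally makes it obvious the rejections are node-condition events (memory, disk, PID) being retried with an incrementing `Count`, not distinct failures.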
Feb 9 18:39:04.660980 kubelet[1897]: W0209 18:39:04.660892 1897 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.20.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:04.661245 kubelet[1897]: E0209 18:39:04.661108 1897 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.14" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:39:04.726960 kubelet[1897]: I0209 18:39:04.726924 1897 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:39:04.732178 kubelet[1897]: E0209 18:39:04.732160 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:05.118688 kubelet[1897]: E0209 18:39:05.118657 1897 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.14" not found Feb 9 18:39:05.158955 kubelet[1897]: E0209 18:39:05.158922 1897 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.14\" not found" node="10.200.20.14" Feb 9 18:39:05.255945 kubelet[1897]: I0209 18:39:05.255924 1897 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.14" Feb 9 18:39:05.259000 kubelet[1897]: I0209 18:39:05.258974 1897 kubelet_node_status.go:73] "Successfully registered node" node="10.200.20.14" Feb 9 18:39:05.271554 kubelet[1897]: E0209 18:39:05.271529 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.372357 kubelet[1897]: E0209 18:39:05.372260 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"10.200.20.14\" not found" Feb 9 18:39:05.472639 kubelet[1897]: E0209 18:39:05.472613 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.573267 kubelet[1897]: E0209 18:39:05.573232 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.673676 kubelet[1897]: E0209 18:39:05.673592 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.714568 sudo[1684]: pam_unix(sudo:session): session closed for user root Feb 9 18:39:05.732944 kubelet[1897]: E0209 18:39:05.732912 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:05.774347 kubelet[1897]: E0209 18:39:05.774323 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.808901 sshd[1681]: pam_unix(sshd:session): session closed for user core Feb 9 18:39:05.811494 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:39:05.812131 systemd-logind[1366]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:39:05.812225 systemd[1]: sshd@4-10.200.20.14:22-10.200.12.6:52880.service: Deactivated successfully. Feb 9 18:39:05.813215 systemd-logind[1366]: Removed session 7. 
Feb 9 18:39:05.875021 kubelet[1897]: E0209 18:39:05.874981 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:05.975472 kubelet[1897]: E0209 18:39:05.975379 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.075807 kubelet[1897]: E0209 18:39:06.075765 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.176221 kubelet[1897]: E0209 18:39:06.176200 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.276722 kubelet[1897]: E0209 18:39:06.276642 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.377047 kubelet[1897]: E0209 18:39:06.377023 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.477430 kubelet[1897]: E0209 18:39:06.477411 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.577835 kubelet[1897]: E0209 18:39:06.577738 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.678167 kubelet[1897]: E0209 18:39:06.678147 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.733483 kubelet[1897]: E0209 18:39:06.733461 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:06.779022 kubelet[1897]: E0209 18:39:06.779000 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.879671 kubelet[1897]: E0209 18:39:06.879649 1897 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:06.980078 kubelet[1897]: E0209 18:39:06.980056 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.080420 kubelet[1897]: E0209 18:39:07.080401 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.180835 kubelet[1897]: E0209 18:39:07.180764 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.281328 kubelet[1897]: E0209 18:39:07.281306 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.381743 kubelet[1897]: E0209 18:39:07.381710 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.482133 kubelet[1897]: E0209 18:39:07.482070 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.582488 kubelet[1897]: E0209 18:39:07.582463 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.682829 kubelet[1897]: E0209 18:39:07.682811 1897 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.20.14\" not found" Feb 9 18:39:07.733921 kubelet[1897]: E0209 18:39:07.733830 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:07.783931 kubelet[1897]: I0209 18:39:07.783904 1897 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:39:07.784241 env[1377]: time="2024-02-09T18:39:07.784199996Z" level=info msg="No cni config template is specified, wait for other 
system components to drop the config." Feb 9 18:39:07.784611 kubelet[1897]: I0209 18:39:07.784596 1897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:39:08.734428 kubelet[1897]: E0209 18:39:08.734397 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:08.734767 kubelet[1897]: I0209 18:39:08.734455 1897 apiserver.go:52] "Watching apiserver" Feb 9 18:39:08.736901 kubelet[1897]: I0209 18:39:08.736876 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:39:08.736998 kubelet[1897]: I0209 18:39:08.736983 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:39:08.741873 systemd[1]: Created slice kubepods-besteffort-podd5fc46ef_8971_4f70_9a6b_7df35e70a1cb.slice. Feb 9 18:39:08.747700 kubelet[1897]: I0209 18:39:08.747672 1897 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:39:08.751616 systemd[1]: Created slice kubepods-burstable-pod359db0a9_cf87_4f6c_9039_82fcc37ad555.slice. 
Feb 9 18:39:08.769194 kubelet[1897]: I0209 18:39:08.768464 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-hostproc\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769194 kubelet[1897]: I0209 18:39:08.768541 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-config-path\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769194 kubelet[1897]: I0209 18:39:08.768563 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5fc46ef-8971-4f70-9a6b-7df35e70a1cb-xtables-lock\") pod \"kube-proxy-jfvp6\" (UID: \"d5fc46ef-8971-4f70-9a6b-7df35e70a1cb\") " pod="kube-system/kube-proxy-jfvp6" Feb 9 18:39:08.769194 kubelet[1897]: I0209 18:39:08.768610 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7clxf\" (UniqueName: \"kubernetes.io/projected/d5fc46ef-8971-4f70-9a6b-7df35e70a1cb-kube-api-access-7clxf\") pod \"kube-proxy-jfvp6\" (UID: \"d5fc46ef-8971-4f70-9a6b-7df35e70a1cb\") " pod="kube-system/kube-proxy-jfvp6" Feb 9 18:39:08.769194 kubelet[1897]: I0209 18:39:08.768634 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-hubble-tls\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769385 kubelet[1897]: I0209 18:39:08.768683 1897 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtvz\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-kube-api-access-dxtvz\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769385 kubelet[1897]: I0209 18:39:08.768705 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5fc46ef-8971-4f70-9a6b-7df35e70a1cb-lib-modules\") pod \"kube-proxy-jfvp6\" (UID: \"d5fc46ef-8971-4f70-9a6b-7df35e70a1cb\") " pod="kube-system/kube-proxy-jfvp6" Feb 9 18:39:08.769385 kubelet[1897]: I0209 18:39:08.768723 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-run\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769385 kubelet[1897]: I0209 18:39:08.768764 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-xtables-lock\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769385 kubelet[1897]: I0209 18:39:08.768802 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/359db0a9-cf87-4f6c-9039-82fcc37ad555-clustermesh-secrets\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768821 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-net\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768876 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-bpf-maps\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768904 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-cgroup\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768952 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cni-path\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768972 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-etc-cni-netd\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769504 kubelet[1897]: I0209 18:39:08.768989 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-lib-modules\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " 
pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769659 kubelet[1897]: I0209 18:39:08.769031 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-kernel\") pod \"cilium-8p5pf\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " pod="kube-system/cilium-8p5pf" Feb 9 18:39:08.769659 kubelet[1897]: I0209 18:39:08.769053 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5fc46ef-8971-4f70-9a6b-7df35e70a1cb-kube-proxy\") pod \"kube-proxy-jfvp6\" (UID: \"d5fc46ef-8971-4f70-9a6b-7df35e70a1cb\") " pod="kube-system/kube-proxy-jfvp6" Feb 9 18:39:08.769659 kubelet[1897]: I0209 18:39:08.769066 1897 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:39:09.051456 env[1377]: time="2024-02-09T18:39:09.050511165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfvp6,Uid:d5fc46ef-8971-4f70-9a6b-7df35e70a1cb,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:09.063189 env[1377]: time="2024-02-09T18:39:09.062950784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8p5pf,Uid:359db0a9-cf87-4f6c-9039-82fcc37ad555,Namespace:kube-system,Attempt:0,}" Feb 9 18:39:09.735086 kubelet[1897]: E0209 18:39:09.735054 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:10.010035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534919654.mount: Deactivated successfully. 
Feb 9 18:39:10.035864 env[1377]: time="2024-02-09T18:39:10.035799320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.038457 env[1377]: time="2024-02-09T18:39:10.038424933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.046573 env[1377]: time="2024-02-09T18:39:10.046535818Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.052851 env[1377]: time="2024-02-09T18:39:10.052825176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.055938 env[1377]: time="2024-02-09T18:39:10.055911543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.062203 env[1377]: time="2024-02-09T18:39:10.062176460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.074261 env[1377]: time="2024-02-09T18:39:10.074232343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.075021 env[1377]: time="2024-02-09T18:39:10.074991415Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:10.144948 env[1377]: time="2024-02-09T18:39:10.143705749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:10.144948 env[1377]: time="2024-02-09T18:39:10.143738483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:10.144948 env[1377]: time="2024-02-09T18:39:10.143748292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:10.144948 env[1377]: time="2024-02-09T18:39:10.144204341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a51c03fbcb851978138fe688806f9c1d568da9a557eac3305b97e71346f5e98 pid=1947 runtime=io.containerd.runc.v2 Feb 9 18:39:10.145917 env[1377]: time="2024-02-09T18:39:10.144277306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:10.145917 env[1377]: time="2024-02-09T18:39:10.144307090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:10.145917 env[1377]: time="2024-02-09T18:39:10.144316978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:39:10.145917 env[1377]: time="2024-02-09T18:39:10.144503576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d pid=1951 runtime=io.containerd.runc.v2
Feb 9 18:39:10.159546 systemd[1]: Started cri-containerd-75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d.scope.
Feb 9 18:39:10.163607 systemd[1]: Started cri-containerd-0a51c03fbcb851978138fe688806f9c1d568da9a557eac3305b97e71346f5e98.scope.
Feb 9 18:39:10.193380 env[1377]: time="2024-02-09T18:39:10.193332954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8p5pf,Uid:359db0a9-cf87-4f6c-9039-82fcc37ad555,Namespace:kube-system,Attempt:0,} returns sandbox id \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\""
Feb 9 18:39:10.195517 env[1377]: time="2024-02-09T18:39:10.195484854Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 18:39:10.201430 env[1377]: time="2024-02-09T18:39:10.201400578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jfvp6,Uid:d5fc46ef-8971-4f70-9a6b-7df35e70a1cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a51c03fbcb851978138fe688806f9c1d568da9a557eac3305b97e71346f5e98\""
Feb 9 18:39:10.735991 kubelet[1897]: E0209 18:39:10.735957 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:11.736488 kubelet[1897]: E0209 18:39:11.736441 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:12.737421 kubelet[1897]: E0209 18:39:12.737354 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:13.738326 kubelet[1897]: E0209 18:39:13.738290 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:14.739021 kubelet[1897]: E0209 18:39:14.738976 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:15.246956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492340705.mount: Deactivated successfully.
Feb 9 18:39:15.739125 kubelet[1897]: E0209 18:39:15.739074 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:16.740022 kubelet[1897]: E0209 18:39:16.739989 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:17.385829 env[1377]: time="2024-02-09T18:39:17.385781975Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:17.393325 env[1377]: time="2024-02-09T18:39:17.393295919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:17.397824 env[1377]: time="2024-02-09T18:39:17.397799584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:17.398395 env[1377]: time="2024-02-09T18:39:17.398364602Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 18:39:17.399986 env[1377]: time="2024-02-09T18:39:17.399950541Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 9 18:39:17.401203 env[1377]: time="2024-02-09T18:39:17.401173132Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:39:17.422568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4226508442.mount: Deactivated successfully.
Feb 9 18:39:17.426145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039630506.mount: Deactivated successfully.
Feb 9 18:39:17.443853 env[1377]: time="2024-02-09T18:39:17.443811055Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\""
Feb 9 18:39:17.444664 env[1377]: time="2024-02-09T18:39:17.444634417Z" level=info msg="StartContainer for \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\""
Feb 9 18:39:17.459788 systemd[1]: Started cri-containerd-f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b.scope.
Feb 9 18:39:17.488835 env[1377]: time="2024-02-09T18:39:17.488795778Z" level=info msg="StartContainer for \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\" returns successfully"
Feb 9 18:39:17.496171 systemd[1]: cri-containerd-f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b.scope: Deactivated successfully.
Feb 9 18:39:17.740832 kubelet[1897]: E0209 18:39:17.740722 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:18.421121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b-rootfs.mount: Deactivated successfully.
Feb 9 18:39:18.741473 kubelet[1897]: E0209 18:39:18.741200 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:19.333322 env[1377]: time="2024-02-09T18:39:19.333117869Z" level=info msg="shim disconnected" id=f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b
Feb 9 18:39:19.333322 env[1377]: time="2024-02-09T18:39:19.333181501Z" level=warning msg="cleaning up after shim disconnected" id=f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b namespace=k8s.io
Feb 9 18:39:19.333322 env[1377]: time="2024-02-09T18:39:19.333192718Z" level=info msg="cleaning up dead shim"
Feb 9 18:39:19.340439 env[1377]: time="2024-02-09T18:39:19.340404691Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2073 runtime=io.containerd.runc.v2\n"
Feb 9 18:39:19.742331 kubelet[1897]: E0209 18:39:19.742294 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:19.836381 env[1377]: time="2024-02-09T18:39:19.836344216Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:39:19.859988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412018211.mount: Deactivated successfully.
Feb 9 18:39:19.862764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281800539.mount: Deactivated successfully.
Feb 9 18:39:19.878730 env[1377]: time="2024-02-09T18:39:19.878684101Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\""
Feb 9 18:39:19.879232 env[1377]: time="2024-02-09T18:39:19.879206524Z" level=info msg="StartContainer for \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\""
Feb 9 18:39:19.899730 systemd[1]: Started cri-containerd-fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1.scope.
Feb 9 18:39:19.935221 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:39:19.935426 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:39:19.935579 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 18:39:19.937436 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:39:19.942270 env[1377]: time="2024-02-09T18:39:19.942026825Z" level=info msg="StartContainer for \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\" returns successfully"
Feb 9 18:39:19.942925 systemd[1]: cri-containerd-fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1.scope: Deactivated successfully.
Feb 9 18:39:19.948458 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:39:19.991540 env[1377]: time="2024-02-09T18:39:19.991483436Z" level=info msg="shim disconnected" id=fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1
Feb 9 18:39:19.991726 env[1377]: time="2024-02-09T18:39:19.991546189Z" level=warning msg="cleaning up after shim disconnected" id=fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1 namespace=k8s.io
Feb 9 18:39:19.991726 env[1377]: time="2024-02-09T18:39:19.991559322Z" level=info msg="cleaning up dead shim"
Feb 9 18:39:20.016399 env[1377]: time="2024-02-09T18:39:20.013406538Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2137 runtime=io.containerd.runc.v2\n"
Feb 9 18:39:20.669567 env[1377]: time="2024-02-09T18:39:20.669516701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:20.673847 env[1377]: time="2024-02-09T18:39:20.673814497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:20.676853 env[1377]: time="2024-02-09T18:39:20.676813557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:20.679722 env[1377]: time="2024-02-09T18:39:20.679684903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:20.680196 env[1377]: time="2024-02-09T18:39:20.680170168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:f17f9528c5073692925255c3de3f310109480873912e8b5ddc171ae1e64324ef\""
Feb 9 18:39:20.682998 env[1377]: time="2024-02-09T18:39:20.682965461Z" level=info msg="CreateContainer within sandbox \"0a51c03fbcb851978138fe688806f9c1d568da9a557eac3305b97e71346f5e98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 18:39:20.743353 kubelet[1897]: E0209 18:39:20.743318 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:20.776891 env[1377]: time="2024-02-09T18:39:20.776832791Z" level=info msg="CreateContainer within sandbox \"0a51c03fbcb851978138fe688806f9c1d568da9a557eac3305b97e71346f5e98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8eca61105d0a8b8b7215664ec4fa6ef4251fa6afdab0f7fa28df0f56ad334cfc\""
Feb 9 18:39:20.777725 env[1377]: time="2024-02-09T18:39:20.777699600Z" level=info msg="StartContainer for \"8eca61105d0a8b8b7215664ec4fa6ef4251fa6afdab0f7fa28df0f56ad334cfc\""
Feb 9 18:39:20.793053 systemd[1]: Started cri-containerd-8eca61105d0a8b8b7215664ec4fa6ef4251fa6afdab0f7fa28df0f56ad334cfc.scope.
Feb 9 18:39:20.824003 env[1377]: time="2024-02-09T18:39:20.823951100Z" level=info msg="StartContainer for \"8eca61105d0a8b8b7215664ec4fa6ef4251fa6afdab0f7fa28df0f56ad334cfc\" returns successfully"
Feb 9 18:39:20.843136 env[1377]: time="2024-02-09T18:39:20.843089614Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:39:20.857384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1-rootfs.mount: Deactivated successfully.
Feb 9 18:39:20.865147 kubelet[1897]: I0209 18:39:20.863391 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jfvp6" podStartSLOduration=5.385395356 podCreationTimestamp="2024-02-09 18:39:05 +0000 UTC" firstStartedPulling="2024-02-09 18:39:10.202425552 +0000 UTC m=+7.498872997" lastFinishedPulling="2024-02-09 18:39:20.680375612 +0000 UTC m=+17.976823017" observedRunningTime="2024-02-09 18:39:20.862877438 +0000 UTC m=+18.159324882" watchObservedRunningTime="2024-02-09 18:39:20.863345376 +0000 UTC m=+18.159792821"
Feb 9 18:39:20.878971 env[1377]: time="2024-02-09T18:39:20.878931816Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\""
Feb 9 18:39:20.879650 env[1377]: time="2024-02-09T18:39:20.879626477Z" level=info msg="StartContainer for \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\""
Feb 9 18:39:20.899276 systemd[1]: Started cri-containerd-0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea.scope.
Feb 9 18:39:20.931531 systemd[1]: cri-containerd-0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea.scope: Deactivated successfully.
Feb 9 18:39:20.937530 env[1377]: time="2024-02-09T18:39:20.937493750Z" level=info msg="StartContainer for \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\" returns successfully"
Feb 9 18:39:21.336787 env[1377]: time="2024-02-09T18:39:21.336668781Z" level=info msg="shim disconnected" id=0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea
Feb 9 18:39:21.336787 env[1377]: time="2024-02-09T18:39:21.336708628Z" level=warning msg="cleaning up after shim disconnected" id=0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea namespace=k8s.io
Feb 9 18:39:21.336787 env[1377]: time="2024-02-09T18:39:21.336718450Z" level=info msg="cleaning up dead shim"
Feb 9 18:39:21.343075 env[1377]: time="2024-02-09T18:39:21.343031446Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2282 runtime=io.containerd.runc.v2\n"
Feb 9 18:39:21.743590 kubelet[1897]: E0209 18:39:21.743549 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:21.850470 env[1377]: time="2024-02-09T18:39:21.850426702Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:39:21.856625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea-rootfs.mount: Deactivated successfully.
Feb 9 18:39:21.871563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611765296.mount: Deactivated successfully.
Feb 9 18:39:21.875074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78069125.mount: Deactivated successfully.
Feb 9 18:39:21.887789 env[1377]: time="2024-02-09T18:39:21.887746509Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\""
Feb 9 18:39:21.888221 env[1377]: time="2024-02-09T18:39:21.888196561Z" level=info msg="StartContainer for \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\""
Feb 9 18:39:21.901263 systemd[1]: Started cri-containerd-2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d.scope.
Feb 9 18:39:21.929638 systemd[1]: cri-containerd-2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d.scope: Deactivated successfully.
Feb 9 18:39:21.931574 env[1377]: time="2024-02-09T18:39:21.931321658Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359db0a9_cf87_4f6c_9039_82fcc37ad555.slice/cri-containerd-2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d.scope/memory.events\": no such file or directory"
Feb 9 18:39:21.936820 env[1377]: time="2024-02-09T18:39:21.936756110Z" level=info msg="StartContainer for \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\" returns successfully"
Feb 9 18:39:21.968545 env[1377]: time="2024-02-09T18:39:21.968489744Z" level=info msg="shim disconnected" id=2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d
Feb 9 18:39:21.968545 env[1377]: time="2024-02-09T18:39:21.968539812Z" level=warning msg="cleaning up after shim disconnected" id=2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d namespace=k8s.io
Feb 9 18:39:21.968545 env[1377]: time="2024-02-09T18:39:21.968548596Z" level=info msg="cleaning up dead shim"
Feb 9 18:39:21.975193 env[1377]: time="2024-02-09T18:39:21.975151300Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:39:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2413 runtime=io.containerd.runc.v2\n"
Feb 9 18:39:22.744626 kubelet[1897]: E0209 18:39:22.744602 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:22.854107 env[1377]: time="2024-02-09T18:39:22.854068655Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:39:22.876863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145558285.mount: Deactivated successfully.
Feb 9 18:39:22.881307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709130025.mount: Deactivated successfully.
Feb 9 18:39:22.894675 env[1377]: time="2024-02-09T18:39:22.894633933Z" level=info msg="CreateContainer within sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\""
Feb 9 18:39:22.895246 env[1377]: time="2024-02-09T18:39:22.895216191Z" level=info msg="StartContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\""
Feb 9 18:39:22.908972 systemd[1]: Started cri-containerd-71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7.scope.
Feb 9 18:39:22.952461 env[1377]: time="2024-02-09T18:39:22.952403988Z" level=info msg="StartContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" returns successfully"
Feb 9 18:39:23.040805 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:39:23.118384 kubelet[1897]: I0209 18:39:23.118346 1897 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 18:39:23.418808 kernel: Initializing XFRM netlink socket
Feb 9 18:39:23.428799 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:39:23.731682 kubelet[1897]: E0209 18:39:23.731556 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:23.745940 kubelet[1897]: E0209 18:39:23.745910 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:23.871724 kubelet[1897]: I0209 18:39:23.871696 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8p5pf" podStartSLOduration=11.667238211 podCreationTimestamp="2024-02-09 18:39:05 +0000 UTC" firstStartedPulling="2024-02-09 18:39:10.194988176 +0000 UTC m=+7.491435621" lastFinishedPulling="2024-02-09 18:39:17.399414339 +0000 UTC m=+14.695861863" observedRunningTime="2024-02-09 18:39:23.8710042 +0000 UTC m=+21.167451644" watchObservedRunningTime="2024-02-09 18:39:23.871664453 +0000 UTC m=+21.168111858"
Feb 9 18:39:24.746129 kubelet[1897]: E0209 18:39:24.746088 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:25.055594 systemd-networkd[1528]: cilium_host: Link UP
Feb 9 18:39:25.055933 systemd-networkd[1528]: cilium_net: Link UP
Feb 9 18:39:25.070740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 18:39:25.070882 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 18:39:25.073096 systemd-networkd[1528]: cilium_net: Gained carrier
Feb 9 18:39:25.073225 systemd-networkd[1528]: cilium_host: Gained carrier
Feb 9 18:39:25.073335 systemd-networkd[1528]: cilium_net: Gained IPv6LL
Feb 9 18:39:25.073439 systemd-networkd[1528]: cilium_host: Gained IPv6LL
Feb 9 18:39:25.212953 systemd-networkd[1528]: cilium_vxlan: Link UP
Feb 9 18:39:25.212960 systemd-networkd[1528]: cilium_vxlan: Gained carrier
Feb 9 18:39:25.453797 kernel: NET: Registered PF_ALG protocol family
Feb 9 18:39:25.746956 kubelet[1897]: E0209 18:39:25.746848 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:25.911498 kubelet[1897]: I0209 18:39:25.911468 1897 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:39:25.916023 systemd[1]: Created slice kubepods-besteffort-pod6d506458_cae4_488b_867f_e2db6b46e73f.slice.
Feb 9 18:39:25.956016 kubelet[1897]: I0209 18:39:25.955988 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvbp\" (UniqueName: \"kubernetes.io/projected/6d506458-cae4-488b-867f-e2db6b46e73f-kube-api-access-bmvbp\") pod \"nginx-deployment-845c78c8b9-5b6ww\" (UID: \"6d506458-cae4-488b-867f-e2db6b46e73f\") " pod="default/nginx-deployment-845c78c8b9-5b6ww"
Feb 9 18:39:26.090588 systemd-networkd[1528]: lxc_health: Link UP
Feb 9 18:39:26.100436 systemd-networkd[1528]: lxc_health: Gained carrier
Feb 9 18:39:26.100874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:39:26.219387 env[1377]: time="2024-02-09T18:39:26.219051770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-5b6ww,Uid:6d506458-cae4-488b-867f-e2db6b46e73f,Namespace:default,Attempt:0,}"
Feb 9 18:39:26.292708 systemd-networkd[1528]: lxc52e395060e19: Link UP
Feb 9 18:39:26.307822 kernel: eth0: renamed from tmp8205f
Feb 9 18:39:26.320806 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc52e395060e19: link becomes ready
Feb 9 18:39:26.320864 systemd-networkd[1528]: lxc52e395060e19: Gained carrier
Feb 9 18:39:26.378923 systemd-networkd[1528]: cilium_vxlan: Gained IPv6LL
Feb 9 18:39:26.747620 kubelet[1897]: E0209 18:39:26.747494 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:27.275975 systemd-networkd[1528]: lxc_health: Gained IPv6LL
Feb 9 18:39:27.594977 systemd-networkd[1528]: lxc52e395060e19: Gained IPv6LL
Feb 9 18:39:27.748263 kubelet[1897]: E0209 18:39:27.748236 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:28.749206 kubelet[1897]: E0209 18:39:28.749173 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:29.749652 kubelet[1897]: E0209 18:39:29.749617 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:29.834310 env[1377]: time="2024-02-09T18:39:29.834226379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:39:29.834310 env[1377]: time="2024-02-09T18:39:29.834269203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:39:29.834310 env[1377]: time="2024-02-09T18:39:29.834279430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:39:29.834882 env[1377]: time="2024-02-09T18:39:29.834835388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1 pid=2935 runtime=io.containerd.runc.v2
Feb 9 18:39:29.850289 systemd[1]: run-containerd-runc-k8s.io-8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1-runc.w8rXez.mount: Deactivated successfully.
Feb 9 18:39:29.853396 systemd[1]: Started cri-containerd-8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1.scope.
Feb 9 18:39:29.885557 env[1377]: time="2024-02-09T18:39:29.885521354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-5b6ww,Uid:6d506458-cae4-488b-867f-e2db6b46e73f,Namespace:default,Attempt:0,} returns sandbox id \"8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1\""
Feb 9 18:39:29.889036 env[1377]: time="2024-02-09T18:39:29.888953060Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 18:39:30.750133 kubelet[1897]: E0209 18:39:30.750079 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:31.751177 kubelet[1897]: E0209 18:39:31.751129 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:32.337369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988102554.mount: Deactivated successfully.
Feb 9 18:39:32.752552 kubelet[1897]: E0209 18:39:32.752248 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:33.236302 env[1377]: time="2024-02-09T18:39:33.236258240Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:33.243383 env[1377]: time="2024-02-09T18:39:33.243349443Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:33.248719 env[1377]: time="2024-02-09T18:39:33.248690081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:33.253397 env[1377]: time="2024-02-09T18:39:33.253368338Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:33.254064 env[1377]: time="2024-02-09T18:39:33.254034554Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 18:39:33.256173 env[1377]: time="2024-02-09T18:39:33.256138325Z" level=info msg="CreateContainer within sandbox \"8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 18:39:33.280174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746047774.mount: Deactivated successfully.
Feb 9 18:39:33.286502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817180348.mount: Deactivated successfully.
Feb 9 18:39:33.302622 env[1377]: time="2024-02-09T18:39:33.302579876Z" level=info msg="CreateContainer within sandbox \"8205f794ff43397be8aab6b12a846383bc64192f548324ce00463e962be639a1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3673633767fa6cb5db5152efb0c8160b65b8e54794ee4bc78faa91122bd166a0\""
Feb 9 18:39:33.303712 env[1377]: time="2024-02-09T18:39:33.303682485Z" level=info msg="StartContainer for \"3673633767fa6cb5db5152efb0c8160b65b8e54794ee4bc78faa91122bd166a0\""
Feb 9 18:39:33.319861 systemd[1]: Started cri-containerd-3673633767fa6cb5db5152efb0c8160b65b8e54794ee4bc78faa91122bd166a0.scope.
Feb 9 18:39:33.345032 env[1377]: time="2024-02-09T18:39:33.344989009Z" level=info msg="StartContainer for \"3673633767fa6cb5db5152efb0c8160b65b8e54794ee4bc78faa91122bd166a0\" returns successfully"
Feb 9 18:39:33.423802 kubelet[1897]: I0209 18:39:33.423727 1897 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 18:39:33.752637 kubelet[1897]: E0209 18:39:33.752603 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:33.883339 kubelet[1897]: I0209 18:39:33.883245 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-5b6ww" podStartSLOduration=5.515189248 podCreationTimestamp="2024-02-09 18:39:25 +0000 UTC" firstStartedPulling="2024-02-09 18:39:29.886853026 +0000 UTC m=+27.183300471" lastFinishedPulling="2024-02-09 18:39:33.2548717 +0000 UTC m=+30.551319145" observedRunningTime="2024-02-09 18:39:33.882966072 +0000 UTC m=+31.179413517" watchObservedRunningTime="2024-02-09 18:39:33.883207922 +0000 UTC m=+31.179655367"
Feb 9 18:39:34.753283 kubelet[1897]: E0209 18:39:34.753248 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:35.753693 kubelet[1897]: E0209 18:39:35.753665 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:36.754985 kubelet[1897]: E0209 18:39:36.754956 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:37.755742 kubelet[1897]: E0209 18:39:37.755697 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:38.756210 kubelet[1897]: E0209 18:39:38.756171 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:39.756836 kubelet[1897]: E0209 18:39:39.756801 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:40.757702 kubelet[1897]: E0209 18:39:40.757663 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:40.914351 kubelet[1897]: I0209 18:39:40.914320 1897 topology_manager.go:212] "Topology Admit Handler"
Feb 9 18:39:40.918720 systemd[1]: Created slice kubepods-besteffort-podafa67674_b3ba_47e9_91b2_bb83727679b9.slice.
Feb 9 18:39:41.022621 kubelet[1897]: I0209 18:39:41.022523 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/afa67674-b3ba-47e9-91b2-bb83727679b9-data\") pod \"nfs-server-provisioner-0\" (UID: \"afa67674-b3ba-47e9-91b2-bb83727679b9\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:39:41.022872 kubelet[1897]: I0209 18:39:41.022859 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfl8\" (UniqueName: \"kubernetes.io/projected/afa67674-b3ba-47e9-91b2-bb83727679b9-kube-api-access-4sfl8\") pod \"nfs-server-provisioner-0\" (UID: \"afa67674-b3ba-47e9-91b2-bb83727679b9\") " pod="default/nfs-server-provisioner-0"
Feb 9 18:39:41.221669 env[1377]: time="2024-02-09T18:39:41.221626563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:afa67674-b3ba-47e9-91b2-bb83727679b9,Namespace:default,Attempt:0,}"
Feb 9 18:39:41.272198 systemd-networkd[1528]: lxc66b794c7c0a1: Link UP
Feb 9 18:39:41.285799 kernel: eth0: renamed from tmp10dc0
Feb 9 18:39:41.298072 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 18:39:41.298181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc66b794c7c0a1: link becomes ready
Feb 9 18:39:41.304144 systemd-networkd[1528]: lxc66b794c7c0a1: Gained carrier
Feb 9 18:39:41.534867 env[1377]: time="2024-02-09T18:39:41.534769882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:39:41.534867 env[1377]: time="2024-02-09T18:39:41.534828391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:39:41.535072 env[1377]: time="2024-02-09T18:39:41.534838462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:39:41.535135 env[1377]: time="2024-02-09T18:39:41.535092162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d pid=3060 runtime=io.containerd.runc.v2
Feb 9 18:39:41.550969 systemd[1]: Started cri-containerd-10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d.scope.
Feb 9 18:39:41.579833 env[1377]: time="2024-02-09T18:39:41.579796205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:afa67674-b3ba-47e9-91b2-bb83727679b9,Namespace:default,Attempt:0,} returns sandbox id \"10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d\""
Feb 9 18:39:41.581663 env[1377]: time="2024-02-09T18:39:41.581631213Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 18:39:41.759133 kubelet[1897]: E0209 18:39:41.759090 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:42.135463 systemd[1]: run-containerd-runc-k8s.io-10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d-runc.MZLtKE.mount: Deactivated successfully.
Feb 9 18:39:42.759892 kubelet[1897]: E0209 18:39:42.759846 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:43.147034 systemd-networkd[1528]: lxc66b794c7c0a1: Gained IPv6LL
Feb 9 18:39:43.731861 kubelet[1897]: E0209 18:39:43.731824 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:43.760246 kubelet[1897]: E0209 18:39:43.760215 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:44.017417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399294889.mount: Deactivated successfully.
Feb 9 18:39:44.760691 kubelet[1897]: E0209 18:39:44.760643 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:45.761232 kubelet[1897]: E0209 18:39:45.761180 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:45.899382 env[1377]: time="2024-02-09T18:39:45.899339105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:45.909039 env[1377]: time="2024-02-09T18:39:45.909006404Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:45.915350 env[1377]: time="2024-02-09T18:39:45.915316216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:45.922350 env[1377]: time="2024-02-09T18:39:45.922312770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:39:45.923085 env[1377]: time="2024-02-09T18:39:45.923051430Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 18:39:45.925534 env[1377]: time="2024-02-09T18:39:45.925491597Z" level=info msg="CreateContainer within sandbox \"10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 18:39:45.951340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433768024.mount: Deactivated successfully.
Feb 9 18:39:45.956600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049963323.mount: Deactivated successfully.
Feb 9 18:39:45.973059 env[1377]: time="2024-02-09T18:39:45.973020245Z" level=info msg="CreateContainer within sandbox \"10dc0f1ce6bbe54b1b54dcdc1cf5b1789a301aa85442caa240af19658caa480d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d489813ab12adae20a99d2963207fed4aa9e53372bd2d94461bc301840c54c4b\""
Feb 9 18:39:45.973864 env[1377]: time="2024-02-09T18:39:45.973829531Z" level=info msg="StartContainer for \"d489813ab12adae20a99d2963207fed4aa9e53372bd2d94461bc301840c54c4b\""
Feb 9 18:39:45.990015 systemd[1]: Started cri-containerd-d489813ab12adae20a99d2963207fed4aa9e53372bd2d94461bc301840c54c4b.scope.
Feb 9 18:39:46.025134 env[1377]: time="2024-02-09T18:39:46.024615910Z" level=info msg="StartContainer for \"d489813ab12adae20a99d2963207fed4aa9e53372bd2d94461bc301840c54c4b\" returns successfully"
Feb 9 18:39:46.761515 kubelet[1897]: E0209 18:39:46.761481 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:46.923037 kubelet[1897]: I0209 18:39:46.923012 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.580189991 podCreationTimestamp="2024-02-09 18:39:40 +0000 UTC" firstStartedPulling="2024-02-09 18:39:41.581102991 +0000 UTC m=+38.877550396" lastFinishedPulling="2024-02-09 18:39:45.923881819 +0000 UTC m=+43.220329264" observedRunningTime="2024-02-09 18:39:46.922216316 +0000 UTC m=+44.218663760" watchObservedRunningTime="2024-02-09 18:39:46.922968859 +0000 UTC m=+44.219416264"
Feb 9 18:39:47.762594 kubelet[1897]: E0209 18:39:47.762561 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:48.763559 kubelet[1897]: E0209 18:39:48.763518 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:49.763705 kubelet[1897]: E0209 18:39:49.763673 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:50.765013 kubelet[1897]: E0209 18:39:50.764971 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:51.766072 kubelet[1897]: E0209 18:39:51.766023 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:39:52.766892 kubelet[1897]: E0209 18:39:52.766855 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring"
path="/etc/kubernetes/manifests" Feb 9 18:39:53.767663 kubelet[1897]: E0209 18:39:53.767636 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:54.768581 kubelet[1897]: E0209 18:39:54.768544 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:55.769614 kubelet[1897]: E0209 18:39:55.769578 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:55.898978 kubelet[1897]: I0209 18:39:55.898948 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:39:55.903308 systemd[1]: Created slice kubepods-besteffort-podcf0acc3e_674e_435e_9a03_bdc564b86bdb.slice. Feb 9 18:39:55.997149 kubelet[1897]: I0209 18:39:55.997123 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-77225b1a-5ce9-408c-b728-4e689f05370d\" (UniqueName: \"kubernetes.io/nfs/cf0acc3e-674e-435e-9a03-bdc564b86bdb-pvc-77225b1a-5ce9-408c-b728-4e689f05370d\") pod \"test-pod-1\" (UID: \"cf0acc3e-674e-435e-9a03-bdc564b86bdb\") " pod="default/test-pod-1" Feb 9 18:39:55.997357 kubelet[1897]: I0209 18:39:55.997343 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g88m\" (UniqueName: \"kubernetes.io/projected/cf0acc3e-674e-435e-9a03-bdc564b86bdb-kube-api-access-7g88m\") pod \"test-pod-1\" (UID: \"cf0acc3e-674e-435e-9a03-bdc564b86bdb\") " pod="default/test-pod-1" Feb 9 18:39:56.269801 kernel: FS-Cache: Loaded Feb 9 18:39:56.327728 kernel: RPC: Registered named UNIX socket transport module. Feb 9 18:39:56.327857 kernel: RPC: Registered udp transport module. Feb 9 18:39:56.337896 kernel: RPC: Registered tcp transport module. Feb 9 18:39:56.337951 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 18:39:56.410861 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 18:39:56.572837 kernel: NFS: Registering the id_resolver key type Feb 9 18:39:56.572966 kernel: Key type id_resolver registered Feb 9 18:39:56.572999 kernel: Key type id_legacy registered Feb 9 18:39:56.770068 kubelet[1897]: E0209 18:39:56.770032 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:56.849245 nfsidmap[3175]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-aae0fbc2cf' Feb 9 18:39:56.950375 nfsidmap[3177]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-aae0fbc2cf' Feb 9 18:39:57.106562 env[1377]: time="2024-02-09T18:39:57.106521451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf0acc3e-674e-435e-9a03-bdc564b86bdb,Namespace:default,Attempt:0,}" Feb 9 18:39:57.163229 systemd-networkd[1528]: lxccf37771a08e2: Link UP Feb 9 18:39:57.174859 kernel: eth0: renamed from tmp3e5dc Feb 9 18:39:57.192236 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:39:57.192380 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf37771a08e2: link becomes ready Feb 9 18:39:57.192688 systemd-networkd[1528]: lxccf37771a08e2: Gained carrier Feb 9 18:39:57.416278 env[1377]: time="2024-02-09T18:39:57.416152559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:39:57.416420 env[1377]: time="2024-02-09T18:39:57.416192494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:39:57.416420 env[1377]: time="2024-02-09T18:39:57.416202607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:39:57.416695 env[1377]: time="2024-02-09T18:39:57.416655282Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e5dc8040efcfaa0566751a1a36ee699e339a93b395ee51492ee5675cfd739bc pid=3203 runtime=io.containerd.runc.v2 Feb 9 18:39:57.431119 systemd[1]: Started cri-containerd-3e5dc8040efcfaa0566751a1a36ee699e339a93b395ee51492ee5675cfd739bc.scope. Feb 9 18:39:57.464732 env[1377]: time="2024-02-09T18:39:57.464695683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:cf0acc3e-674e-435e-9a03-bdc564b86bdb,Namespace:default,Attempt:0,} returns sandbox id \"3e5dc8040efcfaa0566751a1a36ee699e339a93b395ee51492ee5675cfd739bc\"" Feb 9 18:39:57.466915 env[1377]: time="2024-02-09T18:39:57.466888499Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:39:57.771393 kubelet[1897]: E0209 18:39:57.771293 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:57.789007 env[1377]: time="2024-02-09T18:39:57.788969270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:57.797456 env[1377]: time="2024-02-09T18:39:57.797420656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:57.801074 env[1377]: time="2024-02-09T18:39:57.801036414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:57.807462 env[1377]: time="2024-02-09T18:39:57.807425862Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:39:57.808229 env[1377]: time="2024-02-09T18:39:57.808192818Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 18:39:57.810315 env[1377]: time="2024-02-09T18:39:57.810267468Z" level=info msg="CreateContainer within sandbox \"3e5dc8040efcfaa0566751a1a36ee699e339a93b395ee51492ee5675cfd739bc\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 18:39:57.858356 env[1377]: time="2024-02-09T18:39:57.858306590Z" level=info msg="CreateContainer within sandbox \"3e5dc8040efcfaa0566751a1a36ee699e339a93b395ee51492ee5675cfd739bc\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e5febd01531a70c12c6ec48eb70f7291954a593f18cba82613a669135affb12d\"" Feb 9 18:39:57.858837 env[1377]: time="2024-02-09T18:39:57.858810632Z" level=info msg="StartContainer for \"e5febd01531a70c12c6ec48eb70f7291954a593f18cba82613a669135affb12d\"" Feb 9 18:39:57.873315 systemd[1]: Started cri-containerd-e5febd01531a70c12c6ec48eb70f7291954a593f18cba82613a669135affb12d.scope. 
Feb 9 18:39:57.911214 env[1377]: time="2024-02-09T18:39:57.911177263Z" level=info msg="StartContainer for \"e5febd01531a70c12c6ec48eb70f7291954a593f18cba82613a669135affb12d\" returns successfully" Feb 9 18:39:57.941069 kubelet[1897]: I0209 18:39:57.941031 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.598363764 podCreationTimestamp="2024-02-09 18:39:42 +0000 UTC" firstStartedPulling="2024-02-09 18:39:57.466292835 +0000 UTC m=+54.762740280" lastFinishedPulling="2024-02-09 18:39:57.8089178 +0000 UTC m=+55.105365205" observedRunningTime="2024-02-09 18:39:57.940690877 +0000 UTC m=+55.237138282" watchObservedRunningTime="2024-02-09 18:39:57.940988689 +0000 UTC m=+55.237436134" Feb 9 18:39:58.136279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110606897.mount: Deactivated successfully. Feb 9 18:39:58.772251 kubelet[1897]: E0209 18:39:58.772202 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:39:59.211047 systemd-networkd[1528]: lxccf37771a08e2: Gained IPv6LL Feb 9 18:39:59.773148 kubelet[1897]: E0209 18:39:59.773117 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:00.773618 kubelet[1897]: E0209 18:40:00.773584 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:01.774712 kubelet[1897]: E0209 18:40:01.774677 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:02.776060 kubelet[1897]: E0209 18:40:02.776025 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:03.482394 systemd[1]: 
run-containerd-runc-k8s.io-71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7-runc.wummNJ.mount: Deactivated successfully. Feb 9 18:40:03.497026 env[1377]: time="2024-02-09T18:40:03.496962270Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:40:03.503330 env[1377]: time="2024-02-09T18:40:03.503293097Z" level=info msg="StopContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" with timeout 1 (s)" Feb 9 18:40:03.503633 env[1377]: time="2024-02-09T18:40:03.503610283Z" level=info msg="Stop container \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" with signal terminated" Feb 9 18:40:03.509906 systemd-networkd[1528]: lxc_health: Link DOWN Feb 9 18:40:03.509911 systemd-networkd[1528]: lxc_health: Lost carrier Feb 9 18:40:03.529195 systemd[1]: cri-containerd-71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7.scope: Deactivated successfully. Feb 9 18:40:03.529481 systemd[1]: cri-containerd-71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7.scope: Consumed 6.104s CPU time. Feb 9 18:40:03.544587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7-rootfs.mount: Deactivated successfully. 
Feb 9 18:40:03.732159 kubelet[1897]: E0209 18:40:03.732117 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:03.777136 kubelet[1897]: E0209 18:40:03.776602 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:03.809585 kubelet[1897]: E0209 18:40:03.809550 1897 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:04.107380 env[1377]: time="2024-02-09T18:40:04.107320877Z" level=info msg="shim disconnected" id=71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7 Feb 9 18:40:04.107380 env[1377]: time="2024-02-09T18:40:04.107379967Z" level=warning msg="cleaning up after shim disconnected" id=71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7 namespace=k8s.io Feb 9 18:40:04.107570 env[1377]: time="2024-02-09T18:40:04.107390231Z" level=info msg="cleaning up dead shim" Feb 9 18:40:04.113757 env[1377]: time="2024-02-09T18:40:04.113710154Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3334 runtime=io.containerd.runc.v2\n" Feb 9 18:40:04.120627 env[1377]: time="2024-02-09T18:40:04.120588691Z" level=info msg="StopContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" returns successfully" Feb 9 18:40:04.121174 env[1377]: time="2024-02-09T18:40:04.121144245Z" level=info msg="StopPodSandbox for \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\"" Feb 9 18:40:04.121233 env[1377]: time="2024-02-09T18:40:04.121202047Z" level=info msg="Container to stop \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:04.121233 env[1377]: 
time="2024-02-09T18:40:04.121216256Z" level=info msg="Container to stop \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:04.121284 env[1377]: time="2024-02-09T18:40:04.121229056Z" level=info msg="Container to stop \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:04.121284 env[1377]: time="2024-02-09T18:40:04.121240568Z" level=info msg="Container to stop \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:04.121284 env[1377]: time="2024-02-09T18:40:04.121251677Z" level=info msg="Container to stop \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:04.122599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d-shm.mount: Deactivated successfully. Feb 9 18:40:04.128601 systemd[1]: cri-containerd-75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d.scope: Deactivated successfully. 
Feb 9 18:40:04.159064 env[1377]: time="2024-02-09T18:40:04.159021119Z" level=info msg="shim disconnected" id=75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d Feb 9 18:40:04.159287 env[1377]: time="2024-02-09T18:40:04.159270358Z" level=warning msg="cleaning up after shim disconnected" id=75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d namespace=k8s.io Feb 9 18:40:04.159368 env[1377]: time="2024-02-09T18:40:04.159355409Z" level=info msg="cleaning up dead shim" Feb 9 18:40:04.166343 env[1377]: time="2024-02-09T18:40:04.166301609Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3366 runtime=io.containerd.runc.v2\n" Feb 9 18:40:04.166733 env[1377]: time="2024-02-09T18:40:04.166708917Z" level=info msg="TearDown network for sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" successfully" Feb 9 18:40:04.166861 env[1377]: time="2024-02-09T18:40:04.166842190Z" level=info msg="StopPodSandbox for \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" returns successfully" Feb 9 18:40:04.243023 kubelet[1897]: I0209 18:40:04.242989 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-hostproc\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243023 kubelet[1897]: I0209 18:40:04.243025 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-run\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243043 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-cgroup\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243064 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-etc-cni-netd\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243080 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-xtables-lock\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243097 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cni-path\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243124 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-hubble-tls\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243201 kubelet[1897]: I0209 18:40:04.243147 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxtvz\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-kube-api-access-dxtvz\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243166 1897 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-net\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243185 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-kernel\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243207 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-config-path\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243228 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/359db0a9-cf87-4f6c-9039-82fcc37ad555-clustermesh-secrets\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243244 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-bpf-maps\") pod \"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243349 kubelet[1897]: I0209 18:40:04.243260 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-lib-modules\") pod 
\"359db0a9-cf87-4f6c-9039-82fcc37ad555\" (UID: \"359db0a9-cf87-4f6c-9039-82fcc37ad555\") " Feb 9 18:40:04.243490 kubelet[1897]: I0209 18:40:04.243314 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243490 kubelet[1897]: I0209 18:40:04.243343 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-hostproc" (OuterVolumeSpecName: "hostproc") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243490 kubelet[1897]: I0209 18:40:04.243359 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243490 kubelet[1897]: I0209 18:40:04.243374 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243490 kubelet[1897]: I0209 18:40:04.243388 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243606 kubelet[1897]: I0209 18:40:04.243402 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.243606 kubelet[1897]: I0209 18:40:04.243416 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cni-path" (OuterVolumeSpecName: "cni-path") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.245169 kubelet[1897]: I0209 18:40:04.243713 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.245169 kubelet[1897]: W0209 18:40:04.244048 1897 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/359db0a9-cf87-4f6c-9039-82fcc37ad555/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:04.246228 kubelet[1897]: I0209 18:40:04.246202 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.246411 kubelet[1897]: I0209 18:40:04.246377 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:04.246478 kubelet[1897]: I0209 18:40:04.246459 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:04.246921 kubelet[1897]: I0209 18:40:04.246897 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:04.248113 kubelet[1897]: I0209 18:40:04.248077 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359db0a9-cf87-4f6c-9039-82fcc37ad555-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:04.248961 kubelet[1897]: I0209 18:40:04.248939 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-kube-api-access-dxtvz" (OuterVolumeSpecName: "kube-api-access-dxtvz") pod "359db0a9-cf87-4f6c-9039-82fcc37ad555" (UID: "359db0a9-cf87-4f6c-9039-82fcc37ad555"). InnerVolumeSpecName "kube-api-access-dxtvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:04.343516 kubelet[1897]: I0209 18:40:04.343484 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-config-path\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.343687 kubelet[1897]: I0209 18:40:04.343675 1897 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/359db0a9-cf87-4f6c-9039-82fcc37ad555-clustermesh-secrets\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.343749 kubelet[1897]: I0209 18:40:04.343741 1897 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-bpf-maps\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.343835 kubelet[1897]: I0209 18:40:04.343826 1897 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-lib-modules\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.343907 kubelet[1897]: I0209 18:40:04.343899 1897 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-etc-cni-netd\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.343963 kubelet[1897]: I0209 18:40:04.343954 1897 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-hostproc\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344041 kubelet[1897]: I0209 18:40:04.344033 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-run\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344098 kubelet[1897]: I0209 18:40:04.344091 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cilium-cgroup\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344154 kubelet[1897]: I0209 18:40:04.344146 1897 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-xtables-lock\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344209 kubelet[1897]: I0209 18:40:04.344201 1897 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-cni-path\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344269 kubelet[1897]: I0209 18:40:04.344260 1897 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-kernel\") on node \"10.200.20.14\" DevicePath 
\"\"" Feb 9 18:40:04.344327 kubelet[1897]: I0209 18:40:04.344320 1897 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-hubble-tls\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344385 kubelet[1897]: I0209 18:40:04.344377 1897 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dxtvz\" (UniqueName: \"kubernetes.io/projected/359db0a9-cf87-4f6c-9039-82fcc37ad555-kube-api-access-dxtvz\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.344440 kubelet[1897]: I0209 18:40:04.344432 1897 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/359db0a9-cf87-4f6c-9039-82fcc37ad555-host-proc-sys-net\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:04.478581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d-rootfs.mount: Deactivated successfully. Feb 9 18:40:04.478672 systemd[1]: var-lib-kubelet-pods-359db0a9\x2dcf87\x2d4f6c\x2d9039\x2d82fcc37ad555-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxtvz.mount: Deactivated successfully. Feb 9 18:40:04.478732 systemd[1]: var-lib-kubelet-pods-359db0a9\x2dcf87\x2d4f6c\x2d9039\x2d82fcc37ad555-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:04.478801 systemd[1]: var-lib-kubelet-pods-359db0a9\x2dcf87\x2d4f6c\x2d9039\x2d82fcc37ad555-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:40:04.777728 kubelet[1897]: E0209 18:40:04.777623 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:04.946555 kubelet[1897]: I0209 18:40:04.946533 1897 scope.go:115] "RemoveContainer" containerID="71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7" Feb 9 18:40:04.948251 env[1377]: time="2024-02-09T18:40:04.948209969Z" level=info msg="RemoveContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\"" Feb 9 18:40:04.950414 systemd[1]: Removed slice kubepods-burstable-pod359db0a9_cf87_4f6c_9039_82fcc37ad555.slice. Feb 9 18:40:04.950490 systemd[1]: kubepods-burstable-pod359db0a9_cf87_4f6c_9039_82fcc37ad555.slice: Consumed 6.187s CPU time. Feb 9 18:40:04.956868 env[1377]: time="2024-02-09T18:40:04.956831687Z" level=info msg="RemoveContainer for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" returns successfully" Feb 9 18:40:04.957552 kubelet[1897]: I0209 18:40:04.957523 1897 scope.go:115] "RemoveContainer" containerID="2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d" Feb 9 18:40:04.958615 env[1377]: time="2024-02-09T18:40:04.958372966Z" level=info msg="RemoveContainer for \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\"" Feb 9 18:40:04.968148 env[1377]: time="2024-02-09T18:40:04.968073310Z" level=info msg="RemoveContainer for \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\" returns successfully" Feb 9 18:40:04.968368 kubelet[1897]: I0209 18:40:04.968353 1897 scope.go:115] "RemoveContainer" containerID="0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea" Feb 9 18:40:04.969493 env[1377]: time="2024-02-09T18:40:04.969457848Z" level=info msg="RemoveContainer for \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\"" Feb 9 18:40:04.982896 env[1377]: time="2024-02-09T18:40:04.982850845Z" level=info msg="RemoveContainer for 
\"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\" returns successfully" Feb 9 18:40:04.983086 kubelet[1897]: I0209 18:40:04.983068 1897 scope.go:115] "RemoveContainer" containerID="fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1" Feb 9 18:40:04.984159 env[1377]: time="2024-02-09T18:40:04.984125335Z" level=info msg="RemoveContainer for \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\"" Feb 9 18:40:04.996463 env[1377]: time="2024-02-09T18:40:04.996426866Z" level=info msg="RemoveContainer for \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\" returns successfully" Feb 9 18:40:04.996632 kubelet[1897]: I0209 18:40:04.996607 1897 scope.go:115] "RemoveContainer" containerID="f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b" Feb 9 18:40:04.997613 env[1377]: time="2024-02-09T18:40:04.997577744Z" level=info msg="RemoveContainer for \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\"" Feb 9 18:40:05.007130 env[1377]: time="2024-02-09T18:40:05.007099853Z" level=info msg="RemoveContainer for \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\" returns successfully" Feb 9 18:40:05.007290 kubelet[1897]: I0209 18:40:05.007266 1897 scope.go:115] "RemoveContainer" containerID="71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7" Feb 9 18:40:05.007599 env[1377]: time="2024-02-09T18:40:05.007524274Z" level=error msg="ContainerStatus for \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\": not found" Feb 9 18:40:05.007747 kubelet[1897]: E0209 18:40:05.007722 1897 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\": not found" containerID="71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7" Feb 9 18:40:05.007799 kubelet[1897]: I0209 18:40:05.007760 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7} err="failed to get container status \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"71294baffe0d29056cb0030dfc7c76cd804dc87c3493ce0baab889bf5324d9a7\": not found" Feb 9 18:40:05.007799 kubelet[1897]: I0209 18:40:05.007772 1897 scope.go:115] "RemoveContainer" containerID="2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d" Feb 9 18:40:05.007976 env[1377]: time="2024-02-09T18:40:05.007926081Z" level=error msg="ContainerStatus for \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\": not found" Feb 9 18:40:05.008095 kubelet[1897]: E0209 18:40:05.008075 1897 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\": not found" containerID="2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d" Feb 9 18:40:05.008147 kubelet[1897]: I0209 18:40:05.008106 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d} err="failed to get container status \"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"2458316e433cd9c1e0912693f777788b50f484a3567a0d159b0d42e8f614ca4d\": not found" Feb 9 18:40:05.008147 kubelet[1897]: I0209 18:40:05.008117 1897 scope.go:115] "RemoveContainer" containerID="0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea" Feb 9 18:40:05.008313 env[1377]: time="2024-02-09T18:40:05.008266890Z" level=error msg="ContainerStatus for \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\": not found" Feb 9 18:40:05.008532 kubelet[1897]: E0209 18:40:05.008435 1897 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\": not found" containerID="0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea" Feb 9 18:40:05.008532 kubelet[1897]: I0209 18:40:05.008475 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea} err="failed to get container status \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f18544dcec0956ccf0e63f6a417a51c7401cbf12d25c9b69c391779f6c72dea\": not found" Feb 9 18:40:05.008532 kubelet[1897]: I0209 18:40:05.008487 1897 scope.go:115] "RemoveContainer" containerID="fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1" Feb 9 18:40:05.008800 env[1377]: time="2024-02-09T18:40:05.008741607Z" level=error msg="ContainerStatus for \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\": not 
found" Feb 9 18:40:05.008951 kubelet[1897]: E0209 18:40:05.008932 1897 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\": not found" containerID="fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1" Feb 9 18:40:05.008993 kubelet[1897]: I0209 18:40:05.008964 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1} err="failed to get container status \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb717a9478548a0e33cf50247afb8d76c99049f0c07269dba47353891bc9f4a1\": not found" Feb 9 18:40:05.008993 kubelet[1897]: I0209 18:40:05.008975 1897 scope.go:115] "RemoveContainer" containerID="f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b" Feb 9 18:40:05.009168 env[1377]: time="2024-02-09T18:40:05.009120641Z" level=error msg="ContainerStatus for \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\": not found" Feb 9 18:40:05.009289 kubelet[1897]: E0209 18:40:05.009267 1897 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\": not found" containerID="f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b" Feb 9 18:40:05.009327 kubelet[1897]: I0209 18:40:05.009296 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b} err="failed to get container status \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1f2b8a6248e6f42fa18d2ed883e2b588be7a02effd02e14f6ddae1ab1c62d6b\": not found" Feb 9 18:40:05.778047 kubelet[1897]: E0209 18:40:05.778008 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:05.803826 kubelet[1897]: I0209 18:40:05.803560 1897 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=359db0a9-cf87-4f6c-9039-82fcc37ad555 path="/var/lib/kubelet/pods/359db0a9-cf87-4f6c-9039-82fcc37ad555/volumes" Feb 9 18:40:06.407260 kubelet[1897]: I0209 18:40:06.407231 1897 setters.go:548] "Node became not ready" node="10.200.20.14" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:40:06.407179812 +0000 UTC m=+63.703627257 LastTransitionTime:2024-02-09 18:40:06.407179812 +0000 UTC m=+63.703627257 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:40:06.779023 kubelet[1897]: E0209 18:40:06.778921 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:07.200741 kubelet[1897]: I0209 18:40:07.200713 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:40:07.200959 kubelet[1897]: E0209 18:40:07.200946 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="mount-cgroup" Feb 9 18:40:07.201040 kubelet[1897]: E0209 18:40:07.201031 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="apply-sysctl-overwrites" Feb 9 18:40:07.201117 kubelet[1897]: E0209 
18:40:07.201109 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="mount-bpf-fs" Feb 9 18:40:07.201180 kubelet[1897]: E0209 18:40:07.201165 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="clean-cilium-state" Feb 9 18:40:07.201237 kubelet[1897]: E0209 18:40:07.201229 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="cilium-agent" Feb 9 18:40:07.201313 kubelet[1897]: I0209 18:40:07.201305 1897 memory_manager.go:346] "RemoveStaleState removing state" podUID="359db0a9-cf87-4f6c-9039-82fcc37ad555" containerName="cilium-agent" Feb 9 18:40:07.206025 systemd[1]: Created slice kubepods-besteffort-podf8a21661_f5ef_4c86_b266_da176c8564cb.slice. Feb 9 18:40:07.249207 kubelet[1897]: I0209 18:40:07.249170 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:40:07.256430 systemd[1]: Created slice kubepods-burstable-poddf8f7e9e_fec2_45e4_ad0c_f3a69b47614e.slice. 
Feb 9 18:40:07.261447 kubelet[1897]: I0209 18:40:07.261420 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a21661-f5ef-4c86-b266-da176c8564cb-cilium-config-path\") pod \"cilium-operator-574c4bb98d-2cm6d\" (UID: \"f8a21661-f5ef-4c86-b266-da176c8564cb\") " pod="kube-system/cilium-operator-574c4bb98d-2cm6d" Feb 9 18:40:07.261567 kubelet[1897]: I0209 18:40:07.261458 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnq8t\" (UniqueName: \"kubernetes.io/projected/f8a21661-f5ef-4c86-b266-da176c8564cb-kube-api-access-tnq8t\") pod \"cilium-operator-574c4bb98d-2cm6d\" (UID: \"f8a21661-f5ef-4c86-b266-da176c8564cb\") " pod="kube-system/cilium-operator-574c4bb98d-2cm6d" Feb 9 18:40:07.361993 kubelet[1897]: I0209 18:40:07.361911 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-kernel\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362210 kubelet[1897]: I0209 18:40:07.362198 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-run\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362312 kubelet[1897]: I0209 18:40:07.362302 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cni-path\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362401 
kubelet[1897]: I0209 18:40:07.362392 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-ipsec-secrets\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362482 kubelet[1897]: I0209 18:40:07.362473 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-bpf-maps\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362593 kubelet[1897]: I0209 18:40:07.362583 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hostproc\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362690 kubelet[1897]: I0209 18:40:07.362681 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-cgroup\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362763 kubelet[1897]: I0209 18:40:07.362754 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-lib-modules\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362872 kubelet[1897]: I0209 18:40:07.362862 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hubble-tls\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.362951 kubelet[1897]: I0209 18:40:07.362941 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfl2z\" (UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-kube-api-access-nfl2z\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.363043 kubelet[1897]: I0209 18:40:07.363033 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-etc-cni-netd\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.363203 kubelet[1897]: I0209 18:40:07.363174 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-xtables-lock\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.363254 kubelet[1897]: I0209 18:40:07.363211 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-clustermesh-secrets\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.363254 kubelet[1897]: I0209 18:40:07.363232 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-config-path\") pod 
\"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.363254 kubelet[1897]: I0209 18:40:07.363250 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-net\") pod \"cilium-7mk4l\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " pod="kube-system/cilium-7mk4l" Feb 9 18:40:07.508843 env[1377]: time="2024-02-09T18:40:07.508727280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-2cm6d,Uid:f8a21661-f5ef-4c86-b266-da176c8564cb,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:07.542798 env[1377]: time="2024-02-09T18:40:07.542614002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:07.542798 env[1377]: time="2024-02-09T18:40:07.542650272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:07.542798 env[1377]: time="2024-02-09T18:40:07.542660043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:07.543127 env[1377]: time="2024-02-09T18:40:07.543083621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43de8f1307ab98451b2cd151aaaf25d73e8fb52129e25a3d890eceb6e9bafcca pid=3393 runtime=io.containerd.runc.v2 Feb 9 18:40:07.552635 systemd[1]: Started cri-containerd-43de8f1307ab98451b2cd151aaaf25d73e8fb52129e25a3d890eceb6e9bafcca.scope. 
Feb 9 18:40:07.564472 env[1377]: time="2024-02-09T18:40:07.564436432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mk4l,Uid:df8f7e9e-fec2-45e4-ad0c-f3a69b47614e,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:07.583556 env[1377]: time="2024-02-09T18:40:07.583509867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-2cm6d,Uid:f8a21661-f5ef-4c86-b266-da176c8564cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"43de8f1307ab98451b2cd151aaaf25d73e8fb52129e25a3d890eceb6e9bafcca\"" Feb 9 18:40:07.585403 env[1377]: time="2024-02-09T18:40:07.585366751Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:40:07.608133 env[1377]: time="2024-02-09T18:40:07.608054994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:07.608133 env[1377]: time="2024-02-09T18:40:07.608102041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:07.608133 env[1377]: time="2024-02-09T18:40:07.608112174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:07.608528 env[1377]: time="2024-02-09T18:40:07.608490354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40 pid=3433 runtime=io.containerd.runc.v2 Feb 9 18:40:07.618400 systemd[1]: Started cri-containerd-ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40.scope. 
Feb 9 18:40:07.641755 env[1377]: time="2024-02-09T18:40:07.641702303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7mk4l,Uid:df8f7e9e-fec2-45e4-ad0c-f3a69b47614e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\"" Feb 9 18:40:07.644397 env[1377]: time="2024-02-09T18:40:07.644362996Z" level=info msg="CreateContainer within sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:07.678874 env[1377]: time="2024-02-09T18:40:07.678830037Z" level=info msg="CreateContainer within sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\"" Feb 9 18:40:07.679506 env[1377]: time="2024-02-09T18:40:07.679482654Z" level=info msg="StartContainer for \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\"" Feb 9 18:40:07.693505 systemd[1]: Started cri-containerd-1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655.scope. Feb 9 18:40:07.704220 systemd[1]: cri-containerd-1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655.scope: Deactivated successfully. Feb 9 18:40:07.704469 systemd[1]: Stopped cri-containerd-1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655.scope. 
Feb 9 18:40:07.768806 env[1377]: time="2024-02-09T18:40:07.768670795Z" level=info msg="shim disconnected" id=1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655 Feb 9 18:40:07.768806 env[1377]: time="2024-02-09T18:40:07.768720213Z" level=warning msg="cleaning up after shim disconnected" id=1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655 namespace=k8s.io Feb 9 18:40:07.768806 env[1377]: time="2024-02-09T18:40:07.768729100Z" level=info msg="cleaning up dead shim" Feb 9 18:40:07.776717 env[1377]: time="2024-02-09T18:40:07.776665980Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3494 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:40:07.777026 env[1377]: time="2024-02-09T18:40:07.776931893Z" level=error msg="copy shim log" error="read /proc/self/fd/65: file already closed" Feb 9 18:40:07.777886 env[1377]: time="2024-02-09T18:40:07.777851026Z" level=error msg="Failed to pipe stdout of container \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\"" error="reading from a closed fifo" Feb 9 18:40:07.778594 env[1377]: time="2024-02-09T18:40:07.778561023Z" level=error msg="Failed to pipe stderr of container \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\"" error="reading from a closed fifo" Feb 9 18:40:07.779675 kubelet[1897]: E0209 18:40:07.779647 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:07.783249 env[1377]: time="2024-02-09T18:40:07.783196215Z" level=error msg="StartContainer for \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:40:07.783431 kubelet[1897]: E0209 18:40:07.783411 1897 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655" Feb 9 18:40:07.783536 kubelet[1897]: E0209 18:40:07.783517 1897 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:40:07.783536 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:40:07.783536 kubelet[1897]: rm /hostbin/cilium-mount Feb 9 18:40:07.783631 kubelet[1897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nfl2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-7mk4l_kube-system(df8f7e9e-fec2-45e4-ad0c-f3a69b47614e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:40:07.783631 kubelet[1897]: E0209 18:40:07.783559 1897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable 
to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7mk4l" podUID=df8f7e9e-fec2-45e4-ad0c-f3a69b47614e Feb 9 18:40:07.956148 env[1377]: time="2024-02-09T18:40:07.956103659Z" level=info msg="CreateContainer within sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 18:40:07.995252 env[1377]: time="2024-02-09T18:40:07.995210717Z" level=info msg="CreateContainer within sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\"" Feb 9 18:40:07.996266 env[1377]: time="2024-02-09T18:40:07.996241394Z" level=info msg="StartContainer for \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\"" Feb 9 18:40:08.010126 systemd[1]: Started cri-containerd-a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18.scope. Feb 9 18:40:08.023108 systemd[1]: cri-containerd-a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18.scope: Deactivated successfully. 
Feb 9 18:40:08.040245 env[1377]: time="2024-02-09T18:40:08.040197698Z" level=info msg="shim disconnected" id=a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18 Feb 9 18:40:08.040449 env[1377]: time="2024-02-09T18:40:08.040432618Z" level=warning msg="cleaning up after shim disconnected" id=a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18 namespace=k8s.io Feb 9 18:40:08.040523 env[1377]: time="2024-02-09T18:40:08.040511125Z" level=info msg="cleaning up dead shim" Feb 9 18:40:08.047243 env[1377]: time="2024-02-09T18:40:08.047205263Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3532 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:40:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:40:08.047679 env[1377]: time="2024-02-09T18:40:08.047631729Z" level=error msg="copy shim log" error="read /proc/self/fd/70: file already closed" Feb 9 18:40:08.047931 env[1377]: time="2024-02-09T18:40:08.047901421Z" level=error msg="Failed to pipe stderr of container \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\"" error="reading from a closed fifo" Feb 9 18:40:08.048140 env[1377]: time="2024-02-09T18:40:08.048116885Z" level=error msg="Failed to pipe stdout of container \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\"" error="reading from a closed fifo" Feb 9 18:40:08.052167 env[1377]: time="2024-02-09T18:40:08.052122586Z" level=error msg="StartContainer for \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:40:08.052354 kubelet[1897]: E0209 18:40:08.052323 1897 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18" Feb 9 18:40:08.052456 kubelet[1897]: E0209 18:40:08.052419 1897 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:40:08.052456 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:40:08.052456 kubelet[1897]: rm /hostbin/cilium-mount Feb 9 18:40:08.052456 kubelet[1897]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nfl2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-7mk4l_kube-system(df8f7e9e-fec2-45e4-ad0c-f3a69b47614e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:40:08.052456 kubelet[1897]: E0209 18:40:08.052452 1897 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7mk4l" podUID=df8f7e9e-fec2-45e4-ad0c-f3a69b47614e Feb 9 18:40:08.780641 kubelet[1897]: E0209 18:40:08.780606 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:08.810478 kubelet[1897]: E0209 18:40:08.810454 1897 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:08.961885 kubelet[1897]: I0209 18:40:08.961740 1897 scope.go:115] "RemoveContainer" containerID="1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655" Feb 9 18:40:08.962180 env[1377]: time="2024-02-09T18:40:08.962144543Z" level=info 
msg="StopPodSandbox for \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\"" Feb 9 18:40:08.965035 env[1377]: time="2024-02-09T18:40:08.962203996Z" level=info msg="Container to stop \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:08.965035 env[1377]: time="2024-02-09T18:40:08.962218829Z" level=info msg="Container to stop \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:40:08.963895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40-shm.mount: Deactivated successfully. Feb 9 18:40:08.967036 env[1377]: time="2024-02-09T18:40:08.966988021Z" level=info msg="RemoveContainer for \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\"" Feb 9 18:40:08.970116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2281526072.mount: Deactivated successfully. Feb 9 18:40:08.974613 systemd[1]: cri-containerd-ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40.scope: Deactivated successfully. Feb 9 18:40:08.999262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40-rootfs.mount: Deactivated successfully. 
Feb 9 18:40:09.017315 env[1377]: time="2024-02-09T18:40:09.017188861Z" level=info msg="shim disconnected" id=ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40 Feb 9 18:40:09.017315 env[1377]: time="2024-02-09T18:40:09.017234312Z" level=warning msg="cleaning up after shim disconnected" id=ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40 namespace=k8s.io Feb 9 18:40:09.017315 env[1377]: time="2024-02-09T18:40:09.017243515Z" level=info msg="cleaning up dead shim" Feb 9 18:40:09.024355 env[1377]: time="2024-02-09T18:40:09.024314416Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3564 runtime=io.containerd.runc.v2\n" Feb 9 18:40:09.024602 env[1377]: time="2024-02-09T18:40:09.024573825Z" level=info msg="TearDown network for sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" successfully" Feb 9 18:40:09.024650 env[1377]: time="2024-02-09T18:40:09.024603242Z" level=info msg="StopPodSandbox for \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" returns successfully" Feb 9 18:40:09.053333 env[1377]: time="2024-02-09T18:40:09.052612891Z" level=info msg="RemoveContainer for \"1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655\" returns successfully" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075635 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-etc-cni-netd\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075672 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-bpf-maps\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: 
\"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075718 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075749 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hostproc\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075790 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-lib-modules\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075811 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-net\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075830 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-xtables-lock\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075864 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075882 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hostproc" (OuterVolumeSpecName: "hostproc") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075905 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075931 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075955 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-cgroup\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.075979 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfl2z\" (UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-kube-api-access-nfl2z\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.076013 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077104 kubelet[1897]: I0209 18:40:09.076031 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076243 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-run\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076271 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-config-path\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076290 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hubble-tls\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076335 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-kernel\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076357 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-ipsec-secrets\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076374 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cni-path\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076403 1897 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-clustermesh-secrets\") pod \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\" (UID: \"df8f7e9e-fec2-45e4-ad0c-f3a69b47614e\") " Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076436 1897 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-net\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076447 1897 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-xtables-lock\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076457 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-cgroup\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076467 1897 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-etc-cni-netd\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076469 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076485 1897 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-bpf-maps\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076495 1897 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hostproc\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: I0209 18:40:09.076504 1897 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-lib-modules\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.077572 kubelet[1897]: W0209 18:40:09.076442 1897 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:40:09.078770 kubelet[1897]: I0209 18:40:09.078743 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cni-path" (OuterVolumeSpecName: "cni-path") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.079241 kubelet[1897]: I0209 18:40:09.079205 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:40:09.079977 kubelet[1897]: I0209 18:40:09.079943 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:40:09.080060 kubelet[1897]: I0209 18:40:09.080022 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-kube-api-access-nfl2z" (OuterVolumeSpecName: "kube-api-access-nfl2z") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "kube-api-access-nfl2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:09.087007 kubelet[1897]: I0209 18:40:09.086982 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:40:09.087234 kubelet[1897]: I0209 18:40:09.087143 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:09.087323 kubelet[1897]: I0209 18:40:09.087197 1897 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" (UID: "df8f7e9e-fec2-45e4-ad0c-f3a69b47614e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:40:09.177615 kubelet[1897]: I0209 18:40:09.177578 1897 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nfl2z\" (UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-kube-api-access-nfl2z\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.177810 kubelet[1897]: I0209 18:40:09.177799 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-run\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.177886 kubelet[1897]: I0209 18:40:09.177877 1897 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-hubble-tls\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.177955 kubelet[1897]: I0209 18:40:09.177947 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-config-path\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.178025 kubelet[1897]: I0209 18:40:09.178014 1897 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cni-path\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.178095 kubelet[1897]: I0209 18:40:09.178086 1897 reconciler_common.go:300] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-clustermesh-secrets\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.178166 kubelet[1897]: I0209 18:40:09.178158 1897 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-host-proc-sys-kernel\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.178236 kubelet[1897]: I0209 18:40:09.178225 1897 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e-cilium-ipsec-secrets\") on node \"10.200.20.14\" DevicePath \"\"" Feb 9 18:40:09.375119 systemd[1]: var-lib-kubelet-pods-df8f7e9e\x2dfec2\x2d45e4\x2dad0c\x2df3a69b47614e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnfl2z.mount: Deactivated successfully. Feb 9 18:40:09.375209 systemd[1]: var-lib-kubelet-pods-df8f7e9e\x2dfec2\x2d45e4\x2dad0c\x2df3a69b47614e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:40:09.375261 systemd[1]: var-lib-kubelet-pods-df8f7e9e\x2dfec2\x2d45e4\x2dad0c\x2df3a69b47614e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:40:09.375311 systemd[1]: var-lib-kubelet-pods-df8f7e9e\x2dfec2\x2d45e4\x2dad0c\x2df3a69b47614e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:40:09.595109 env[1377]: time="2024-02-09T18:40:09.595064654Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:09.602793 env[1377]: time="2024-02-09T18:40:09.602752148Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:09.607393 env[1377]: time="2024-02-09T18:40:09.607366806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:40:09.607815 env[1377]: time="2024-02-09T18:40:09.607773099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 18:40:09.609587 env[1377]: time="2024-02-09T18:40:09.609546079Z" level=info msg="CreateContainer within sandbox \"43de8f1307ab98451b2cd151aaaf25d73e8fb52129e25a3d890eceb6e9bafcca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:40:09.634624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379341571.mount: Deactivated successfully. Feb 9 18:40:09.640177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867704063.mount: Deactivated successfully. 
Feb 9 18:40:09.654314 env[1377]: time="2024-02-09T18:40:09.654272767Z" level=info msg="CreateContainer within sandbox \"43de8f1307ab98451b2cd151aaaf25d73e8fb52129e25a3d890eceb6e9bafcca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"90a8718cb63b1a861fb7a6ad386eb6826a10d15cb49fd049787cc9dcdab1a79b\"" Feb 9 18:40:09.654914 env[1377]: time="2024-02-09T18:40:09.654891570Z" level=info msg="StartContainer for \"90a8718cb63b1a861fb7a6ad386eb6826a10d15cb49fd049787cc9dcdab1a79b\"" Feb 9 18:40:09.668564 systemd[1]: Started cri-containerd-90a8718cb63b1a861fb7a6ad386eb6826a10d15cb49fd049787cc9dcdab1a79b.scope. Feb 9 18:40:09.693808 env[1377]: time="2024-02-09T18:40:09.693744615Z" level=info msg="StartContainer for \"90a8718cb63b1a861fb7a6ad386eb6826a10d15cb49fd049787cc9dcdab1a79b\" returns successfully" Feb 9 18:40:09.781696 kubelet[1897]: E0209 18:40:09.781631 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:09.807331 systemd[1]: Removed slice kubepods-burstable-poddf8f7e9e_fec2_45e4_ad0c_f3a69b47614e.slice. 
Feb 9 18:40:09.964771 kubelet[1897]: I0209 18:40:09.964680 1897 scope.go:115] "RemoveContainer" containerID="a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18" Feb 9 18:40:09.967353 env[1377]: time="2024-02-09T18:40:09.967313693Z" level=info msg="RemoveContainer for \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\"" Feb 9 18:40:09.978971 env[1377]: time="2024-02-09T18:40:09.978927518Z" level=info msg="RemoveContainer for \"a7cc2ec387a7bdcb3fcac44ef874d61707c3e46ffa8ac121a437ca58abf08d18\" returns successfully" Feb 9 18:40:09.994980 kubelet[1897]: I0209 18:40:09.994947 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-2cm6d" podStartSLOduration=0.971655124 podCreationTimestamp="2024-02-09 18:40:07 +0000 UTC" firstStartedPulling="2024-02-09 18:40:07.584828774 +0000 UTC m=+64.881276219" lastFinishedPulling="2024-02-09 18:40:09.608078522 +0000 UTC m=+66.904525967" observedRunningTime="2024-02-09 18:40:09.993518533 +0000 UTC m=+67.289965938" watchObservedRunningTime="2024-02-09 18:40:09.994904872 +0000 UTC m=+67.291352277" Feb 9 18:40:10.028975 kubelet[1897]: I0209 18:40:10.028934 1897 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:40:10.029121 kubelet[1897]: E0209 18:40:10.029011 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" containerName="mount-cgroup" Feb 9 18:40:10.029121 kubelet[1897]: E0209 18:40:10.029022 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" containerName="mount-cgroup" Feb 9 18:40:10.029121 kubelet[1897]: I0209 18:40:10.029041 1897 memory_manager.go:346] "RemoveStaleState removing state" podUID="df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" containerName="mount-cgroup" Feb 9 18:40:10.029121 kubelet[1897]: I0209 18:40:10.029064 1897 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="df8f7e9e-fec2-45e4-ad0c-f3a69b47614e" containerName="mount-cgroup" Feb 9 18:40:10.033655 systemd[1]: Created slice kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice. Feb 9 18:40:10.082817 kubelet[1897]: I0209 18:40:10.082760 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-cilium-cgroup\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082817 kubelet[1897]: I0209 18:40:10.082820 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-host-proc-sys-kernel\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082855 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-hostproc\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082886 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-lib-modules\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082907 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-cilium-run\") pod \"cilium-v7m5c\" (UID: 
\"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082925 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-host-proc-sys-net\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082965 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-cilium-ipsec-secrets\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.082994 kubelet[1897]: I0209 18:40:10.082987 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-etc-cni-netd\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083006 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-xtables-lock\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083034 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-clustermesh-secrets\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: 
I0209 18:40:10.083055 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-cilium-config-path\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083082 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-bpf-maps\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083110 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-cni-path\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083129 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-hubble-tls\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.083151 kubelet[1897]: I0209 18:40:10.083151 1897 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8495v\" (UniqueName: \"kubernetes.io/projected/a6ce3d7b-4662-4498-af25-d6f8c6c20ac3-kube-api-access-8495v\") pod \"cilium-v7m5c\" (UID: \"a6ce3d7b-4662-4498-af25-d6f8c6c20ac3\") " pod="kube-system/cilium-v7m5c" Feb 9 18:40:10.343468 env[1377]: time="2024-02-09T18:40:10.343110528Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-v7m5c,Uid:a6ce3d7b-4662-4498-af25-d6f8c6c20ac3,Namespace:kube-system,Attempt:0,}" Feb 9 18:40:10.381684 env[1377]: time="2024-02-09T18:40:10.381600226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:40:10.381684 env[1377]: time="2024-02-09T18:40:10.381648237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:40:10.381964 env[1377]: time="2024-02-09T18:40:10.381658522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:40:10.381964 env[1377]: time="2024-02-09T18:40:10.381875918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5 pid=3632 runtime=io.containerd.runc.v2 Feb 9 18:40:10.396400 systemd[1]: Started cri-containerd-ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5.scope. Feb 9 18:40:10.419988 env[1377]: time="2024-02-09T18:40:10.419942033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7m5c,Uid:a6ce3d7b-4662-4498-af25-d6f8c6c20ac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\"" Feb 9 18:40:10.422794 env[1377]: time="2024-02-09T18:40:10.422746764Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:40:10.446478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928968945.mount: Deactivated successfully. 
Feb 9 18:40:10.457085 env[1377]: time="2024-02-09T18:40:10.457039370Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e\"" Feb 9 18:40:10.457739 env[1377]: time="2024-02-09T18:40:10.457700677Z" level=info msg="StartContainer for \"d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e\"" Feb 9 18:40:10.471052 systemd[1]: Started cri-containerd-d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e.scope. Feb 9 18:40:10.500217 systemd[1]: cri-containerd-d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e.scope: Deactivated successfully. Feb 9 18:40:10.501207 env[1377]: time="2024-02-09T18:40:10.501162033Z" level=info msg="StartContainer for \"d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e\" returns successfully" Feb 9 18:40:10.786375 kubelet[1897]: E0209 18:40:10.783864 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:10.795808 env[1377]: time="2024-02-09T18:40:10.795731613Z" level=info msg="shim disconnected" id=d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e Feb 9 18:40:10.795808 env[1377]: time="2024-02-09T18:40:10.795791637Z" level=warning msg="cleaning up after shim disconnected" id=d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e namespace=k8s.io Feb 9 18:40:10.795956 env[1377]: time="2024-02-09T18:40:10.795836595Z" level=info msg="cleaning up dead shim" Feb 9 18:40:10.802617 env[1377]: time="2024-02-09T18:40:10.802570319Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3717 runtime=io.containerd.runc.v2\n" Feb 9 18:40:10.872523 kubelet[1897]: W0209 18:40:10.872427 1897 manager.go:1159] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf8f7e9e_fec2_45e4_ad0c_f3a69b47614e.slice/cri-containerd-1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655.scope WatchSource:0}: container "1d2de6ca76fa7decbebcc9af8319f45d0837120a33b69ac692f1fffe014a3655" in namespace "k8s.io": not found Feb 9 18:40:10.975385 env[1377]: time="2024-02-09T18:40:10.975341619Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:40:11.007166 env[1377]: time="2024-02-09T18:40:11.007120776Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef\"" Feb 9 18:40:11.007896 env[1377]: time="2024-02-09T18:40:11.007870729Z" level=info msg="StartContainer for \"949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef\"" Feb 9 18:40:11.021323 systemd[1]: Started cri-containerd-949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef.scope. Feb 9 18:40:11.052123 systemd[1]: cri-containerd-949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef.scope: Deactivated successfully. 
Feb 9 18:40:11.058712 env[1377]: time="2024-02-09T18:40:11.058675190Z" level=info msg="StartContainer for \"949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef\" returns successfully" Feb 9 18:40:11.087107 env[1377]: time="2024-02-09T18:40:11.087055180Z" level=info msg="shim disconnected" id=949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef Feb 9 18:40:11.087107 env[1377]: time="2024-02-09T18:40:11.087105389Z" level=warning msg="cleaning up after shim disconnected" id=949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef namespace=k8s.io Feb 9 18:40:11.087335 env[1377]: time="2024-02-09T18:40:11.087116073Z" level=info msg="cleaning up dead shim" Feb 9 18:40:11.096814 env[1377]: time="2024-02-09T18:40:11.096751384Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3778 runtime=io.containerd.runc.v2\n" Feb 9 18:40:11.784815 kubelet[1897]: E0209 18:40:11.784765 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:11.803691 kubelet[1897]: I0209 18:40:11.803662 1897 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=df8f7e9e-fec2-45e4-ad0c-f3a69b47614e path="/var/lib/kubelet/pods/df8f7e9e-fec2-45e4-ad0c-f3a69b47614e/volumes" Feb 9 18:40:11.979769 env[1377]: time="2024-02-09T18:40:11.979721795Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:40:12.005978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024638610.mount: Deactivated successfully. Feb 9 18:40:12.010348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512507029.mount: Deactivated successfully. 
Feb 9 18:40:12.025053 env[1377]: time="2024-02-09T18:40:12.024986698Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36\"" Feb 9 18:40:12.025733 env[1377]: time="2024-02-09T18:40:12.025711178Z" level=info msg="StartContainer for \"487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36\"" Feb 9 18:40:12.041706 systemd[1]: Started cri-containerd-487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36.scope. Feb 9 18:40:12.070105 env[1377]: time="2024-02-09T18:40:12.070029534Z" level=info msg="StartContainer for \"487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36\" returns successfully" Feb 9 18:40:12.071558 systemd[1]: cri-containerd-487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36.scope: Deactivated successfully. Feb 9 18:40:12.102240 env[1377]: time="2024-02-09T18:40:12.102192404Z" level=info msg="shim disconnected" id=487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36 Feb 9 18:40:12.102240 env[1377]: time="2024-02-09T18:40:12.102237902Z" level=warning msg="cleaning up after shim disconnected" id=487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36 namespace=k8s.io Feb 9 18:40:12.102240 env[1377]: time="2024-02-09T18:40:12.102247620Z" level=info msg="cleaning up dead shim" Feb 9 18:40:12.108723 env[1377]: time="2024-02-09T18:40:12.108676460Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\n" Feb 9 18:40:12.785059 kubelet[1897]: E0209 18:40:12.785017 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:12.982077 env[1377]: time="2024-02-09T18:40:12.982030853Z" level=info msg="CreateContainer 
within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:40:13.005602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054965990.mount: Deactivated successfully. Feb 9 18:40:13.009771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255723147.mount: Deactivated successfully. Feb 9 18:40:13.019825 env[1377]: time="2024-02-09T18:40:13.019766766Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c\"" Feb 9 18:40:13.020637 env[1377]: time="2024-02-09T18:40:13.020608604Z" level=info msg="StartContainer for \"f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c\"" Feb 9 18:40:13.034843 systemd[1]: Started cri-containerd-f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c.scope. Feb 9 18:40:13.059493 systemd[1]: cri-containerd-f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c.scope: Deactivated successfully. 
Feb 9 18:40:13.064951 env[1377]: time="2024-02-09T18:40:13.064868673Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice/cri-containerd-f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c.scope/memory.events\": no such file or directory" Feb 9 18:40:13.067105 env[1377]: time="2024-02-09T18:40:13.067068018Z" level=info msg="StartContainer for \"f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c\" returns successfully" Feb 9 18:40:13.099499 env[1377]: time="2024-02-09T18:40:13.099449497Z" level=info msg="shim disconnected" id=f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c Feb 9 18:40:13.099499 env[1377]: time="2024-02-09T18:40:13.099495146Z" level=warning msg="cleaning up after shim disconnected" id=f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c namespace=k8s.io Feb 9 18:40:13.099499 env[1377]: time="2024-02-09T18:40:13.099505344Z" level=info msg="cleaning up dead shim" Feb 9 18:40:13.107752 env[1377]: time="2024-02-09T18:40:13.107708202Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:40:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3892 runtime=io.containerd.runc.v2\n" Feb 9 18:40:13.785734 kubelet[1897]: E0209 18:40:13.785697 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:13.814687 kubelet[1897]: E0209 18:40:13.811037 1897 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:40:13.983567 kubelet[1897]: W0209 18:40:13.983534 1897 manager.go:1159] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice/cri-containerd-d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e.scope WatchSource:0}: task d10c0c35c91a0bff461cf68d247ed4c5df7c79bc01999e4303b10e62acce2f9e not found: not found Feb 9 18:40:13.987370 env[1377]: time="2024-02-09T18:40:13.987319650Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:40:14.010129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176647980.mount: Deactivated successfully. Feb 9 18:40:14.015101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940503042.mount: Deactivated successfully. Feb 9 18:40:14.027612 env[1377]: time="2024-02-09T18:40:14.027573006Z" level=info msg="CreateContainer within sandbox \"ee09a2a7b3c4a99dc1c185bb3162a6c88f82795db26573845b5cd1af886569b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b\"" Feb 9 18:40:14.028371 env[1377]: time="2024-02-09T18:40:14.028345951Z" level=info msg="StartContainer for \"5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b\"" Feb 9 18:40:14.042342 systemd[1]: Started cri-containerd-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b.scope. 
Feb 9 18:40:14.079579 env[1377]: time="2024-02-09T18:40:14.079534650Z" level=info msg="StartContainer for \"5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b\" returns successfully" Feb 9 18:40:14.353796 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 18:40:14.786608 kubelet[1897]: E0209 18:40:14.786500 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:15.000785 kubelet[1897]: I0209 18:40:15.000747 1897 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-v7m5c" podStartSLOduration=5.000714032 podCreationTimestamp="2024-02-09 18:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:40:15.000557122 +0000 UTC m=+72.297004527" watchObservedRunningTime="2024-02-09 18:40:15.000714032 +0000 UTC m=+72.297161477" Feb 9 18:40:15.190006 systemd[1]: run-containerd-runc-k8s.io-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b-runc.eikuNT.mount: Deactivated successfully. 
Feb 9 18:40:15.786995 kubelet[1897]: E0209 18:40:15.786955 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:16.787802 kubelet[1897]: E0209 18:40:16.787754 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:16.799758 systemd-networkd[1528]: lxc_health: Link UP Feb 9 18:40:16.814811 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:40:16.815055 systemd-networkd[1528]: lxc_health: Gained carrier Feb 9 18:40:17.092281 kubelet[1897]: W0209 18:40:17.092240 1897 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice/cri-containerd-949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef.scope WatchSource:0}: task 949d8ceb3c8c4083a134a5bad6e3c8cb837e7ddeffdaa21aae0160e2d85a3bef not found: not found Feb 9 18:40:17.344557 systemd[1]: run-containerd-runc-k8s.io-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b-runc.8RYjUd.mount: Deactivated successfully. Feb 9 18:40:17.788565 kubelet[1897]: E0209 18:40:17.788440 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:18.411007 systemd-networkd[1528]: lxc_health: Gained IPv6LL Feb 9 18:40:18.789252 kubelet[1897]: E0209 18:40:18.789140 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:19.541827 systemd[1]: run-containerd-runc-k8s.io-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b-runc.FC75i2.mount: Deactivated successfully. 
Feb 9 18:40:19.790333 kubelet[1897]: E0209 18:40:19.790269 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:20.205241 kubelet[1897]: W0209 18:40:20.205196 1897 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice/cri-containerd-487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36.scope WatchSource:0}: task 487664be3962023cfc83153fdb5dc5e8e5f3f47b2cc3d879c0a6421763b32c36 not found: not found Feb 9 18:40:20.790657 kubelet[1897]: E0209 18:40:20.790617 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:21.683564 systemd[1]: run-containerd-runc-k8s.io-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b-runc.NeCy3q.mount: Deactivated successfully. Feb 9 18:40:21.790898 kubelet[1897]: E0209 18:40:21.790855 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:22.791387 kubelet[1897]: E0209 18:40:22.791353 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:23.313722 kubelet[1897]: W0209 18:40:23.313665 1897 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ce3d7b_4662_4498_af25_d6f8c6c20ac3.slice/cri-containerd-f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c.scope WatchSource:0}: task f523c53307c65c7c3fc1a9d90b3e6cbfb1dbd6fb5a2397df441873179bdc746c not found: not found Feb 9 18:40:23.732417 kubelet[1897]: E0209 18:40:23.732380 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:23.791468 kubelet[1897]: E0209 18:40:23.791443 1897 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:23.819198 systemd[1]: run-containerd-runc-k8s.io-5326152b9b98242c797ec16f1f8d5fc9063e06d9267fe408cf0b3068815a905b-runc.eROYMA.mount: Deactivated successfully. Feb 9 18:40:24.792842 kubelet[1897]: E0209 18:40:24.792807 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:25.793731 kubelet[1897]: E0209 18:40:25.793696 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:26.794479 kubelet[1897]: E0209 18:40:26.794440 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:27.794719 kubelet[1897]: E0209 18:40:27.794691 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:28.795635 kubelet[1897]: E0209 18:40:28.795596 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:29.795981 kubelet[1897]: E0209 18:40:29.795952 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:30.796667 kubelet[1897]: E0209 18:40:30.796620 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:31.797111 kubelet[1897]: E0209 18:40:31.797081 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:32.798539 kubelet[1897]: E0209 18:40:32.798510 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:33.799584 kubelet[1897]: E0209 18:40:33.799558 1897 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:34.800368 kubelet[1897]: E0209 18:40:34.800309 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:35.800621 kubelet[1897]: E0209 18:40:35.800593 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:36.801508 kubelet[1897]: E0209 18:40:36.801477 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:37.802343 kubelet[1897]: E0209 18:40:37.802303 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:37.986980 kubelet[1897]: E0209 18:40:37.986943 1897 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:53412->10.200.20.43:2379: read: connection timed out" Feb 9 18:40:38.803108 kubelet[1897]: E0209 18:40:38.803069 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:39.803829 kubelet[1897]: E0209 18:40:39.803799 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:40.804682 kubelet[1897]: E0209 18:40:40.804650 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:41.805515 kubelet[1897]: E0209 18:40:41.805457 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:42.806509 kubelet[1897]: E0209 18:40:42.806468 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:43.731887 kubelet[1897]: 
E0209 18:40:43.731857 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:43.807243 kubelet[1897]: E0209 18:40:43.807221 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:44.808508 kubelet[1897]: E0209 18:40:44.808479 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:45.809336 kubelet[1897]: E0209 18:40:45.809307 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:46.810389 kubelet[1897]: E0209 18:40:46.810351 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:46.946725 kubelet[1897]: E0209 18:40:46.946697 1897 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:40:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:40:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:40:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:40:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":55608803},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88\\\",\\\"registry.k8s.io/kube-proxy:v1.27.10\\\"],\\\"sizeBytes\\\":23037360},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":253553}]}}\" for node \"10.200.20.14\": Patch \"https://10.200.20.17:6443/api/v1/nodes/10.200.20.14/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:40:47.211456 kubelet[1897]: E0209 18:40:47.211424 1897 
kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.14\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.17:53304->10.200.20.43:2379: read: connection timed out" Feb 9 18:40:47.811479 kubelet[1897]: E0209 18:40:47.811456 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:47.987329 kubelet[1897]: E0209 18:40:47.987299 1897 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:40:48.812397 kubelet[1897]: E0209 18:40:48.812362 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:49.812531 kubelet[1897]: E0209 18:40:49.812480 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:50.813356 kubelet[1897]: E0209 18:40:50.813327 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:51.814027 kubelet[1897]: E0209 18:40:51.813997 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:52.814593 kubelet[1897]: E0209 18:40:52.814562 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:53.814733 kubelet[1897]: E0209 18:40:53.814700 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:54.815232 kubelet[1897]: E0209 18:40:54.815201 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:40:55.815922 kubelet[1897]: E0209 18:40:55.815893 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:56.816971 kubelet[1897]: E0209 18:40:56.816932 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:57.211922 kubelet[1897]: E0209 18:40:57.211885 1897 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.14\": Get \"https://10.200.20.17:6443/api/v1/nodes/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:40:57.817575 kubelet[1897]: E0209 18:40:57.817544 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:57.988373 kubelet[1897]: E0209 18:40:57.988342 1897 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:40:58.817684 kubelet[1897]: E0209 18:40:58.817653 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:40:59.818500 kubelet[1897]: E0209 18:40:59.818470 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:00.818590 kubelet[1897]: E0209 18:41:00.818555 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:01.818716 kubelet[1897]: E0209 18:41:01.818676 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:02.819763 kubelet[1897]: E0209 18:41:02.819735 1897 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:03.732131 kubelet[1897]: E0209 18:41:03.732096 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:03.739716 env[1377]: time="2024-02-09T18:41:03.739677714Z" level=info msg="StopPodSandbox for \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\"" Feb 9 18:41:03.740010 env[1377]: time="2024-02-09T18:41:03.739765471Z" level=info msg="TearDown network for sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" successfully" Feb 9 18:41:03.740010 env[1377]: time="2024-02-09T18:41:03.739810610Z" level=info msg="StopPodSandbox for \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" returns successfully" Feb 9 18:41:03.740155 env[1377]: time="2024-02-09T18:41:03.740120859Z" level=info msg="RemovePodSandbox for \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\"" Feb 9 18:41:03.740194 env[1377]: time="2024-02-09T18:41:03.740155154Z" level=info msg="Forcibly stopping sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\"" Feb 9 18:41:03.740239 env[1377]: time="2024-02-09T18:41:03.740216179Z" level=info msg="TearDown network for sandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" successfully" Feb 9 18:41:03.766209 env[1377]: time="2024-02-09T18:41:03.766166896Z" level=info msg="RemovePodSandbox \"ab1cfd58d6fef799b407b152d723a8b4b20c7b69892dcd34e1ebb98e72c90e40\" returns successfully" Feb 9 18:41:03.766571 env[1377]: time="2024-02-09T18:41:03.766539132Z" level=info msg="StopPodSandbox for \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\"" Feb 9 18:41:03.766660 env[1377]: time="2024-02-09T18:41:03.766608161Z" level=info msg="TearDown network for sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" successfully" Feb 9 
18:41:03.766660 env[1377]: time="2024-02-09T18:41:03.766642335Z" level=info msg="StopPodSandbox for \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" returns successfully" Feb 9 18:41:03.767334 env[1377]: time="2024-02-09T18:41:03.766966710Z" level=info msg="RemovePodSandbox for \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\"" Feb 9 18:41:03.767334 env[1377]: time="2024-02-09T18:41:03.766995442Z" level=info msg="Forcibly stopping sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\"" Feb 9 18:41:03.767334 env[1377]: time="2024-02-09T18:41:03.767058789Z" level=info msg="TearDown network for sandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" successfully" Feb 9 18:41:03.776745 env[1377]: time="2024-02-09T18:41:03.776690691Z" level=info msg="RemovePodSandbox \"75204f0d18426434a8bbb4c411b50d80f159bf0af8668a9601ed2825ae43ce7d\" returns successfully" Feb 9 18:41:03.820415 kubelet[1897]: E0209 18:41:03.820397 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:04.485586 kubelet[1897]: E0209 18:41:04.485558 1897 desired_state_of_world_populator.go:295] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" pod="default/test-pod-1" volumeName="config" Feb 9 18:41:04.821943 kubelet[1897]: E0209 18:41:04.821620 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:05.823183 kubelet[1897]: E0209 18:41:05.823157 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:06.823923 kubelet[1897]: E0209 18:41:06.823890 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:41:07.212583 kubelet[1897]: E0209 18:41:07.212551 1897 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.14\": Get \"https://10.200.20.17:6443/api/v1/nodes/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:41:07.824042 kubelet[1897]: E0209 18:41:07.824003 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:07.988834 kubelet[1897]: E0209 18:41:07.988797 1897 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:41:08.824890 kubelet[1897]: E0209 18:41:08.824858 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:09.825984 kubelet[1897]: E0209 18:41:09.825960 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:10.827247 kubelet[1897]: E0209 18:41:10.827213 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:11.827861 kubelet[1897]: E0209 18:41:11.827822 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:12.828382 kubelet[1897]: E0209 18:41:12.828348 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:13.829085 kubelet[1897]: E0209 18:41:13.829054 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:14.830149 kubelet[1897]: E0209 18:41:14.830106 1897 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:15.831168 kubelet[1897]: E0209 18:41:15.831143 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:16.831928 kubelet[1897]: E0209 18:41:16.831893 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:17.213480 kubelet[1897]: E0209 18:41:17.213446 1897 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.20.14\": Get \"https://10.200.20.17:6443/api/v1/nodes/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:41:17.213480 kubelet[1897]: E0209 18:41:17.213474 1897 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count" Feb 9 18:41:17.832821 kubelet[1897]: E0209 18:41:17.832792 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:17.989657 kubelet[1897]: E0209 18:41:17.989620 1897 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.14?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:41:17.989823 kubelet[1897]: I0209 18:41:17.989681 1897 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 9 18:41:18.833688 kubelet[1897]: E0209 18:41:18.833657 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:19.833845 kubelet[1897]: E0209 18:41:19.833818 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 18:41:20.834508 kubelet[1897]: E0209 18:41:20.834472 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:20.966802 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:20.982952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:20.999796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.017400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.034017 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.051362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.051517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.072022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.072166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.092704 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.092935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.113034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.113344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.133347 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.133542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.153779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.153966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.173669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.173849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.197548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.197725 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.218339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.218541 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.239274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.239469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.259786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.259984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.280616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 
9 18:41:21.280905 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.300498 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.300670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.320637 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.320824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.341099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.341288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.361479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.361655 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.381687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.381926 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.402114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.402289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.421824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.421998 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 18:41:21.442098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.442298 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.461890 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.462070 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.481600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.481804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.501700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.501914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.521661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.521862 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.541919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.542088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.561560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.561724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.580960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.581118 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.599992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.600174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.619752 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.619963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.639516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.639690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.649431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.668617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.668896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.688087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.688264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.721145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.721347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.741790 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.751985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.752091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.771492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.771690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.800874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.801122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.801230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.825300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.825554 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.826813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.835179 kubelet[1897]: E0209 18:41:21.835127 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:21.846271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.846432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.865766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: 
scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.865941 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.885855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.886058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.895625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.905295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.933991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.934191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.934310 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.943708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.962996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.963164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.981722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:21.981920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.001421 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.001649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.021818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.022025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.042361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.042546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.063013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.063198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.083326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.083553 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.103877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.104123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.126438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.126649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.147606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.147817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.158749 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.179312 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.179507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.189487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.211016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.211285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.231382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.231585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.241862 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.262014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.262239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.272420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.293060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.293293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.313160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
18:41:22.313369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.333335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.333524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.353722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.354031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.380386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.380615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.412662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.412915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.434416 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.434597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.456572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.456786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.478226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.478453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 18:41:22.499085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.499370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.520834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.521037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.542569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.542768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.564116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.564302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.585240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.585445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.605646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.605859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.626804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.626995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.647343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.647551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.668063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.668263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.689761 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.689977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.711785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.712050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.732977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.733229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.753565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.753807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.774088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.774360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.794507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.794721 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.814891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.815147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.835595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.835819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.835944 kubelet[1897]: E0209 18:41:22.835541 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:41:22.856445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.856673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.876849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.877085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.897942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.898169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.927693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.927924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:41:22.928027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: 
scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:22.937804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:22.962942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:22.963157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:22.985402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:22.985607 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.007322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.007558 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.028364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.028566 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.060024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.060268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.060384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.080461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.080641 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.092801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.115463 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.115755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.135762 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.135964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.166872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.167119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.167226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.177091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.187685 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.207662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.207901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.217615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.238390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.238612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.259893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.260099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.280333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.280528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.301190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.301401 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.331950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.332152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.332270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.342366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.352627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.363222 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.373810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.394471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.395662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.404866 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.436418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.436761 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.436917 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.446796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.466869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.467083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.488722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.488965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.499516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.519944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.520167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.540229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.540430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.565268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.565513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.579345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.600033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.600244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.610391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.620892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.641820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.642051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.661931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.662179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.682102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.682351 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.702742 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.702972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.723430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.723630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.731770 kubelet[1897]: E0209 18:41:23.731724 1897 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:41:23.742927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.743139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.762951 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.763119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.783718 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.783938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.804050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.804265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.823420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.823640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.836644 kubelet[1897]: E0209 18:41:23.836601 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:41:23.843763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.843975 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.864440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.864661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.883883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.884121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.903909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.904116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.924158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.924340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.944327 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.944498 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.964901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.965095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.984605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:23.984840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.004983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.005197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.025102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.025324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.046031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.046247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.065939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.066151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.085686 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.085912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.105738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.106030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.125919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.126115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.148927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.149147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.175200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.175464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.199959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.200180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.222886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.223145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.245909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.246121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.267368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.267559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.288213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.288411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.309061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.309240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.330470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.330673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.351826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.352041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.361840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.382545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.382758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.392883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.413529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.413701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.432531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.432698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.453734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.454006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.474104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.474342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.493810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.494021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.514017 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.514233 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.534189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.534350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.554998 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.555186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.575391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.575566 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.595692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.595890 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.605863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.625960 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.626137 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.636260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.657025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.657216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.677915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.678082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.697845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.698039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.718482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.718662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.738722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.738938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.759860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.760091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.780354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.780579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.800570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.800814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.820932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.821119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.837091 kubelet[1897]: E0209 18:41:24.837040 1897 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 18:41:24.841028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.841202 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.861688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.861885 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.883865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.884092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.904623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.904910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.925699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.925918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.947313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.947501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.967881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.968071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.989084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#287 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:41:24.989316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#186 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001