Feb 9 18:31:24.002656 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:31:24.002674 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:31:24.002681 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 18:31:24.002689 kernel: printk: bootconsole [pl11] enabled
Feb 9 18:31:24.002694 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:31:24.002699 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 18:31:24.002705 kernel: random: crng init done
Feb 9 18:31:24.002711 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:31:24.002716 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 18:31:24.002721 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002727 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002733 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 18:31:24.002739 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002744 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002751 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002757 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002762 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002769 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002775 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 18:31:24.002781 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:24.002786 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 18:31:24.002792 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:31:24.002798 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:31:24.002803 kernel: NUMA: NODE_DATA [mem 0x1bf7f0900-0x1bf7f5fff]
Feb 9 18:31:24.002809 kernel: Zone ranges:
Feb 9 18:31:24.002815 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 18:31:24.002820 kernel: DMA32 empty
Feb 9 18:31:24.002827 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:31:24.002832 kernel: Movable zone start for each node
Feb 9 18:31:24.002838 kernel: Early memory node ranges
Feb 9 18:31:24.002843 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 18:31:24.002849 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 18:31:24.002855 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 18:31:24.002860 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 18:31:24.002866 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 18:31:24.002871 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 18:31:24.002877 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 18:31:24.002882 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 18:31:24.002888 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:31:24.002920 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:31:24.002929 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 18:31:24.002935 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:31:24.002941 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:31:24.002947 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:31:24.002955 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 18:31:24.002961 kernel: psci: SMC Calling Convention v1.4
Feb 9 18:31:24.002967 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 18:31:24.002973 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 18:31:24.002979 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:31:24.002985 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:31:24.002991 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 18:31:24.002997 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:31:24.003003 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:31:24.003009 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:31:24.003015 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:31:24.003021 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:31:24.003029 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:31:24.003035 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:31:24.003041 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 18:31:24.003047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 18:31:24.003053 kernel: Policy zone: Normal
Feb 9 18:31:24.003060 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:31:24.003067 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:31:24.003073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:31:24.003079 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:31:24.003085 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:31:24.003092 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 18:31:24.003098 kernel: Memory: 3991928K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202232K reserved, 0K cma-reserved)
Feb 9 18:31:24.003105 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 18:31:24.003110 kernel: trace event string verifier disabled
Feb 9 18:31:24.003116 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:31:24.003123 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:31:24.003129 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 18:31:24.003135 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:31:24.003141 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:31:24.003147 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:31:24.003153 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 18:31:24.003160 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:31:24.003166 kernel: GICv3: 960 SPIs implemented
Feb 9 18:31:24.003172 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:31:24.003178 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:31:24.003184 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:31:24.003190 kernel: GICv3: 16 PPIs implemented
Feb 9 18:31:24.003196 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 18:31:24.003202 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 18:31:24.003208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:31:24.003214 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:31:24.003220 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:31:24.003226 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:31:24.003234 kernel: Console: colour dummy device 80x25
Feb 9 18:31:24.003240 kernel: printk: console [tty1] enabled
Feb 9 18:31:24.003246 kernel: ACPI: Core revision 20210730
Feb 9 18:31:24.003253 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:31:24.003259 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:31:24.003265 kernel: LSM: Security Framework initializing
Feb 9 18:31:24.003271 kernel: SELinux: Initializing.
Feb 9 18:31:24.003278 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:31:24.003284 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:31:24.003291 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 18:31:24.003298 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 18:31:24.003304 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:31:24.003310 kernel: Remapping and enabling EFI services.
Feb 9 18:31:24.003316 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:31:24.003322 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:31:24.003329 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 18:31:24.003335 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:31:24.003341 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:31:24.003348 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 18:31:24.003355 kernel: SMP: Total of 2 processors activated.
Feb 9 18:31:24.003361 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:31:24.003367 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 18:31:24.003374 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:31:24.003380 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:31:24.003386 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:31:24.003392 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:31:24.003398 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:31:24.003406 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:31:24.003412 kernel: alternatives: patching kernel code
Feb 9 18:31:24.003423 kernel: devtmpfs: initialized
Feb 9 18:31:24.003431 kernel: KASLR enabled
Feb 9 18:31:24.003437 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:31:24.003444 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 18:31:24.003450 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:31:24.007922 kernel: SMBIOS 3.1.0 present.
Feb 9 18:31:24.007947 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 18:31:24.007956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:31:24.007967 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:31:24.007974 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:31:24.007981 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:31:24.007989 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:31:24.007996 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1
Feb 9 18:31:24.008002 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:31:24.008009 kernel: cpuidle: using governor menu
Feb 9 18:31:24.008017 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:31:24.008024 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:31:24.008031 kernel: ACPI: bus type PCI registered
Feb 9 18:31:24.008037 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:31:24.008044 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:31:24.008051 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:31:24.008057 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:31:24.008064 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:31:24.008071 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:31:24.008079 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:31:24.008085 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:31:24.008092 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:31:24.008099 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:31:24.008105 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:31:24.008112 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:31:24.008118 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:31:24.008125 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:31:24.008131 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:31:24.008139 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:31:24.008146 kernel: ACPI: Interpreter enabled
Feb 9 18:31:24.008152 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:31:24.008159 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:31:24.008166 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:31:24.008173 kernel: printk: bootconsole [pl11] disabled
Feb 9 18:31:24.008179 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 18:31:24.008186 kernel: iommu: Default domain type: Translated
Feb 9 18:31:24.008193 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:31:24.008200 kernel: vgaarb: loaded
Feb 9 18:31:24.008207 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:31:24.008214 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:31:24.008221 kernel: PTP clock support registered
Feb 9 18:31:24.008227 kernel: Registered efivars operations
Feb 9 18:31:24.008234 kernel: No ACPI PMU IRQ for CPU0
Feb 9 18:31:24.008241 kernel: No ACPI PMU IRQ for CPU1
Feb 9 18:31:24.008247 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:31:24.008254 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:31:24.008262 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:31:24.008269 kernel: pnp: PnP ACPI init
Feb 9 18:31:24.008275 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 18:31:24.008282 kernel: NET: Registered PF_INET protocol family
Feb 9 18:31:24.008289 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:31:24.008295 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:31:24.008302 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:31:24.008309 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:31:24.008316 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:31:24.008325 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:31:24.008332 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:31:24.008339 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:31:24.008345 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:31:24.008352 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:31:24.008359 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 18:31:24.008365 kernel: kvm [1]: HYP mode not available
Feb 9 18:31:24.008372 kernel: Initialise system trusted keyrings
Feb 9 18:31:24.008378 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:31:24.008386 kernel: Key type asymmetric registered
Feb 9 18:31:24.008392 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:31:24.008399 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:31:24.008410 kernel: io scheduler mq-deadline registered
Feb 9 18:31:24.008418 kernel: io scheduler kyber registered
Feb 9 18:31:24.008425 kernel: io scheduler bfq registered
Feb 9 18:31:24.008431 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:31:24.008438 kernel: thunder_xcv, ver 1.0
Feb 9 18:31:24.008444 kernel: thunder_bgx, ver 1.0
Feb 9 18:31:24.008453 kernel: nicpf, ver 1.0
Feb 9 18:31:24.008462 kernel: nicvf, ver 1.0
Feb 9 18:31:24.008593 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:31:24.008664 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:31:23 UTC (1707503483)
Feb 9 18:31:24.008675 kernel: efifb: probing for efifb
Feb 9 18:31:24.008682 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 18:31:24.008689 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 18:31:24.008695 kernel: efifb: scrolling: redraw
Feb 9 18:31:24.008704 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 18:31:24.008711 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:31:24.008722 kernel: fb0: EFI VGA frame buffer device
Feb 9 18:31:24.008729 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 18:31:24.008735 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:31:24.008742 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:31:24.008749 kernel: Segment Routing with IPv6
Feb 9 18:31:24.008755 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:31:24.008765 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:31:24.008773 kernel: Key type dns_resolver registered
Feb 9 18:31:24.008779 kernel: registered taskstats version 1
Feb 9 18:31:24.008786 kernel: Loading compiled-in X.509 certificates
Feb 9 18:31:24.008793 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:31:24.008799 kernel: Key type .fscrypt registered
Feb 9 18:31:24.008806 kernel: Key type fscrypt-provisioning registered
Feb 9 18:31:24.008815 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:31:24.008822 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:31:24.008828 kernel: ima: No architecture policies found
Feb 9 18:31:24.008836 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:31:24.008843 kernel: Run /init as init process
Feb 9 18:31:24.008849 kernel: with arguments:
Feb 9 18:31:24.008856 kernel: /init
Feb 9 18:31:24.008865 kernel: with environment:
Feb 9 18:31:24.008871 kernel: HOME=/
Feb 9 18:31:24.008878 kernel: TERM=linux
Feb 9 18:31:24.008884 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:31:24.008902 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:31:24.008914 systemd[1]: Detected virtualization microsoft.
Feb 9 18:31:24.008924 systemd[1]: Detected architecture arm64.
Feb 9 18:31:24.008931 systemd[1]: Running in initrd.
Feb 9 18:31:24.008938 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:31:24.008945 systemd[1]: Hostname set to .
Feb 9 18:31:24.008953 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:31:24.008960 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:31:24.008968 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:31:24.008975 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:31:24.008982 systemd[1]: Reached target paths.target.
Feb 9 18:31:24.008989 systemd[1]: Reached target slices.target.
Feb 9 18:31:24.008999 systemd[1]: Reached target swap.target.
Feb 9 18:31:24.009006 systemd[1]: Reached target timers.target.
Feb 9 18:31:24.009013 systemd[1]: Listening on iscsid.socket.
Feb 9 18:31:24.009020 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:31:24.009029 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:31:24.009036 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:31:24.009043 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:31:24.009053 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:31:24.009060 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:31:24.009067 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:31:24.009074 systemd[1]: Reached target sockets.target.
Feb 9 18:31:24.009081 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:31:24.009088 systemd[1]: Finished network-cleanup.service.
Feb 9 18:31:24.009096 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:31:24.009106 systemd[1]: Starting systemd-journald.service...
Feb 9 18:31:24.009113 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:31:24.009120 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:31:24.009126 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:31:24.009137 systemd-journald[276]: Journal started
Feb 9 18:31:24.009180 systemd-journald[276]: Runtime Journal (/run/log/journal/a7172eb089274e4c9c614b884f761ad8) is 8.0M, max 78.6M, 70.6M free.
Feb 9 18:31:23.997296 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 18:31:24.033911 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:31:24.033952 systemd[1]: Started systemd-journald.service.
Feb 9 18:31:24.046875 kernel: Bridge firewalling registered
Feb 9 18:31:24.046993 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 18:31:24.073231 kernel: audit: type=1130 audit(1707503484.050:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.048295 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 18:31:24.048302 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:31:24.139254 kernel: SCSI subsystem initialized
Feb 9 18:31:24.139277 kernel: audit: type=1130 audit(1707503484.096:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.139287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:31:24.139296 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:31:24.139304 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:31:24.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.048332 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:31:24.191991 kernel: audit: type=1130 audit(1707503484.142:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.050413 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 18:31:24.072839 systemd[1]: Started systemd-resolved.service.
Feb 9 18:31:24.225263 kernel: audit: type=1130 audit(1707503484.196:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.113033 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:31:24.250483 kernel: audit: type=1130 audit(1707503484.205:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.141837 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 18:31:24.143228 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:31:24.283232 kernel: audit: type=1130 audit(1707503484.246:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.196934 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:31:24.220403 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:31:24.246718 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:31:24.255644 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:31:24.280156 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:31:24.352810 kernel: audit: type=1130 audit(1707503484.323:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.287890 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:31:24.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.302432 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:31:24.381727 kernel: audit: type=1130 audit(1707503484.357:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.325001 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:31:24.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.357819 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:31:24.406535 kernel: audit: type=1130 audit(1707503484.385:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.406041 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:31:24.420384 dracut-cmdline[297]: dracut-dracut-053
Feb 9 18:31:24.424213 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:31:24.478912 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:31:24.490924 kernel: iscsi: registered transport (tcp)
Feb 9 18:31:24.509525 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:31:24.509569 kernel: QLogic iSCSI HBA Driver
Feb 9 18:31:24.538794 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:31:24.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:24.544265 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:31:24.595912 kernel: raid6: neonx8 gen() 13816 MB/s
Feb 9 18:31:24.615903 kernel: raid6: neonx8 xor() 10833 MB/s
Feb 9 18:31:24.635907 kernel: raid6: neonx4 gen() 13555 MB/s
Feb 9 18:31:24.656904 kernel: raid6: neonx4 xor() 11233 MB/s
Feb 9 18:31:24.676903 kernel: raid6: neonx2 gen() 12963 MB/s
Feb 9 18:31:24.696904 kernel: raid6: neonx2 xor() 10243 MB/s
Feb 9 18:31:24.717904 kernel: raid6: neonx1 gen() 10513 MB/s
Feb 9 18:31:24.737905 kernel: raid6: neonx1 xor() 8805 MB/s
Feb 9 18:31:24.757905 kernel: raid6: int64x8 gen() 6298 MB/s
Feb 9 18:31:24.778907 kernel: raid6: int64x8 xor() 3549 MB/s
Feb 9 18:31:24.798905 kernel: raid6: int64x4 gen() 7220 MB/s
Feb 9 18:31:24.818902 kernel: raid6: int64x4 xor() 3858 MB/s
Feb 9 18:31:24.839902 kernel: raid6: int64x2 gen() 6153 MB/s
Feb 9 18:31:24.859902 kernel: raid6: int64x2 xor() 3325 MB/s
Feb 9 18:31:24.879905 kernel: raid6: int64x1 gen() 5050 MB/s
Feb 9 18:31:24.904981 kernel: raid6: int64x1 xor() 2646 MB/s
Feb 9 18:31:24.904992 kernel: raid6: using algorithm neonx8 gen() 13816 MB/s
Feb 9 18:31:24.905001 kernel: raid6: .... xor() 10833 MB/s, rmw enabled
Feb 9 18:31:24.909166 kernel: raid6: using neon recovery algorithm
Feb 9 18:31:24.925905 kernel: xor: measuring software checksum speed
Feb 9 18:31:24.929903 kernel: 8regs : 17319 MB/sec
Feb 9 18:31:24.937969 kernel: 32regs : 20749 MB/sec
Feb 9 18:31:24.937979 kernel: arm64_neon : 27978 MB/sec
Feb 9 18:31:24.937987 kernel: xor: using function: arm64_neon (27978 MB/sec)
Feb 9 18:31:24.997915 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:31:25.007088 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:31:25.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:25.015000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:31:25.015000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:31:25.015780 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:31:25.033074 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Feb 9 18:31:25.039981 systemd[1]: Started systemd-udevd.service.
Feb 9 18:31:25.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:25.050059 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:31:25.062544 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Feb 9 18:31:25.093324 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:31:25.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:25.098978 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:31:25.131652 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:31:25.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:25.188916 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 18:31:25.204943 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 18:31:25.204994 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 18:31:25.212424 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Feb 9 18:31:25.220672 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 18:31:25.220815 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 18:31:25.237933 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Feb 9 18:31:25.243079 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 18:31:25.255018 kernel: scsi host0: storvsc_host_t Feb 9 18:31:25.255164 kernel: scsi host1: storvsc_host_t Feb 9 18:31:25.255186 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 18:31:25.268933 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 18:31:25.295127 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 18:31:25.295306 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 18:31:25.303910 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 18:31:25.304078 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 18:31:25.304166 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 18:31:25.311080 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 18:31:25.317883 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 18:31:25.318022 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 18:31:25.324918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 
18:31:25.329918 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 18:31:25.338913 kernel: hv_netvsc 0022487b-5d7f-0022-487b-5d7f0022487b eth0: VF slot 1 added Feb 9 18:31:25.354479 kernel: hv_vmbus: registering driver hv_pci Feb 9 18:31:25.354534 kernel: hv_pci 713756ec-afed-4dee-a256-0d669d64669e: PCI VMBus probing: Using version 0x10004 Feb 9 18:31:25.371419 kernel: hv_pci 713756ec-afed-4dee-a256-0d669d64669e: PCI host bridge to bus afed:00 Feb 9 18:31:25.371649 kernel: pci_bus afed:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 18:31:25.371758 kernel: pci_bus afed:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 18:31:25.387136 kernel: pci afed:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 18:31:25.399056 kernel: pci afed:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:25.421922 kernel: pci afed:00:02.0: enabling Extended Tags Feb 9 18:31:25.446716 kernel: pci afed:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at afed:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 18:31:25.446933 kernel: pci_bus afed:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 18:31:25.453158 kernel: pci afed:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:25.493928 kernel: mlx5_core afed:00:02.0: firmware version: 16.30.1284 Feb 9 18:31:25.648913 kernel: mlx5_core afed:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 18:31:25.674522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:31:25.705917 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (540) Feb 9 18:31:25.720067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 18:31:25.738345 kernel: hv_netvsc 0022487b-5d7f-0022-487b-5d7f0022487b eth0: VF registering: eth1 Feb 9 18:31:25.738584 kernel: mlx5_core afed:00:02.0 eth1: joined to eth0 Feb 9 18:31:25.748917 kernel: mlx5_core afed:00:02.0 enP45037s1: renamed from eth1 Feb 9 18:31:25.898499 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:31:25.904217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:31:25.916304 systemd[1]: Starting disk-uuid.service... Feb 9 18:31:25.977155 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:31:26.946448 disk-uuid[597]: The operation has completed successfully. Feb 9 18:31:26.951559 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 18:31:27.010384 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:31:27.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.010472 systemd[1]: Finished disk-uuid.service. Feb 9 18:31:27.015529 systemd[1]: Starting verity-setup.service... Feb 9 18:31:27.068626 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:31:27.295553 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:31:27.305765 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:31:27.309481 systemd[1]: Finished verity-setup.service. Feb 9 18:31:27.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.369922 kernel: EXT4-fs (dm-0): mounted filesystem without journal. 
Opts: norecovery. Quota mode: none. Feb 9 18:31:27.370234 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:31:27.374123 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:31:27.374839 systemd[1]: Starting ignition-setup.service... Feb 9 18:31:27.381604 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:31:27.420344 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:31:27.420370 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:31:27.420379 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:31:27.475815 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:31:27.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.483000 audit: BPF prog-id=9 op=LOAD Feb 9 18:31:27.485231 systemd[1]: Starting systemd-networkd.service... Feb 9 18:31:27.495941 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:31:27.511693 systemd-networkd[867]: lo: Link UP Feb 9 18:31:27.511706 systemd-networkd[867]: lo: Gained carrier Feb 9 18:31:27.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.512103 systemd-networkd[867]: Enumeration completed Feb 9 18:31:27.512171 systemd[1]: Started systemd-networkd.service. Feb 9 18:31:27.520012 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:31:27.520889 systemd[1]: Reached target network.target. Feb 9 18:31:27.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:31:27.532747 systemd[1]: Starting iscsiuio.service... Feb 9 18:31:27.541488 systemd[1]: Started iscsiuio.service. Feb 9 18:31:27.553256 systemd[1]: Starting iscsid.service... Feb 9 18:31:27.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.579210 iscsid[876]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:31:27.579210 iscsid[876]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 18:31:27.579210 iscsid[876]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:31:27.579210 iscsid[876]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:31:27.579210 iscsid[876]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:31:27.579210 iscsid[876]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:31:27.579210 iscsid[876]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:31:27.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.570209 systemd[1]: Started iscsid.service. Feb 9 18:31:27.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.578679 systemd[1]: Starting dracut-initqueue.service... 
Feb 9 18:31:27.591246 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:31:27.595888 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:31:27.618098 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:31:27.636329 systemd[1]: Reached target remote-fs.target. Feb 9 18:31:27.652109 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:31:27.672439 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:31:27.730919 kernel: mlx5_core afed:00:02.0 enP45037s1: Link up Feb 9 18:31:27.746246 systemd[1]: Finished ignition-setup.service. Feb 9 18:31:27.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:27.751673 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:31:27.778795 kernel: hv_netvsc 0022487b-5d7f-0022-487b-5d7f0022487b eth0: Data path switched to VF: enP45037s1 Feb 9 18:31:27.778983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:31:27.779173 systemd-networkd[867]: enP45037s1: Link UP Feb 9 18:31:27.779364 systemd-networkd[867]: eth0: Link UP Feb 9 18:31:27.779732 systemd-networkd[867]: eth0: Gained carrier Feb 9 18:31:27.787293 systemd-networkd[867]: enP45037s1: Gained carrier Feb 9 18:31:27.801979 systemd-networkd[867]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:31:29.417018 systemd-networkd[867]: eth0: Gained IPv6LL Feb 9 18:31:30.702936 ignition[891]: Ignition 2.14.0 Feb 9 18:31:30.702949 ignition[891]: Stage: fetch-offline Feb 9 18:31:30.703005 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:30.703028 ignition[891]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:30.820512 ignition[891]: no config dir at 
"/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:30.820651 ignition[891]: parsed url from cmdline: "" Feb 9 18:31:30.820655 ignition[891]: no config URL provided Feb 9 18:31:30.820660 ignition[891]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:31:30.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:30.828689 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:31:30.869235 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 18:31:30.869259 kernel: audit: type=1130 audit(1707503490.836:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:30.820668 ignition[891]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:31:30.848801 systemd[1]: Starting ignition-fetch.service... 
Feb 9 18:31:30.820674 ignition[891]: failed to fetch config: resource requires networking Feb 9 18:31:30.820773 ignition[891]: Ignition finished successfully Feb 9 18:31:30.855108 ignition[898]: Ignition 2.14.0 Feb 9 18:31:30.855114 ignition[898]: Stage: fetch Feb 9 18:31:30.855276 ignition[898]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:30.855303 ignition[898]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:30.857926 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:30.858041 ignition[898]: parsed url from cmdline: "" Feb 9 18:31:30.858049 ignition[898]: no config URL provided Feb 9 18:31:30.858054 ignition[898]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:31:30.858062 ignition[898]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:31:30.858089 ignition[898]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 18:31:30.964628 ignition[898]: GET result: OK Feb 9 18:31:30.964707 ignition[898]: config has been read from IMDS userdata Feb 9 18:31:30.964770 ignition[898]: parsing config with SHA512: 00804c22511aeac1162bd361a619b17cef840c9665def880eb3cdb94b41b7e22ad8adc06a9a2aad331b6e048762932329c63c1222f7fab8a7a14409d23baec91 Feb 9 18:31:30.998293 unknown[898]: fetched base config from "system" Feb 9 18:31:31.001950 unknown[898]: fetched base config from "system" Feb 9 18:31:31.001963 unknown[898]: fetched user config from "azure" Feb 9 18:31:31.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:31:31.002677 ignition[898]: fetch: fetch complete Feb 9 18:31:31.039704 kernel: audit: type=1130 audit(1707503491.011:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.007731 systemd[1]: Finished ignition-fetch.service. Feb 9 18:31:31.002689 ignition[898]: fetch: fetch passed Feb 9 18:31:31.032299 systemd[1]: Starting ignition-kargs.service... Feb 9 18:31:31.002731 ignition[898]: Ignition finished successfully Feb 9 18:31:31.078967 kernel: audit: type=1130 audit(1707503491.056:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.052709 systemd[1]: Finished ignition-kargs.service. Feb 9 18:31:31.043228 ignition[904]: Ignition 2.14.0 Feb 9 18:31:31.057884 systemd[1]: Starting ignition-disks.service... Feb 9 18:31:31.043233 ignition[904]: Stage: kargs Feb 9 18:31:31.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.088182 systemd[1]: Finished ignition-disks.service. Feb 9 18:31:31.127856 kernel: audit: type=1130 audit(1707503491.092:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.043334 ignition[904]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:31.092936 systemd[1]: Reached target initrd-root-device.target. 
Feb 9 18:31:31.043352 ignition[904]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:31.116798 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:31:31.045948 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:31.124101 systemd[1]: Reached target local-fs.target. Feb 9 18:31:31.048140 ignition[904]: kargs: kargs passed Feb 9 18:31:31.132001 systemd[1]: Reached target sysinit.target. Feb 9 18:31:31.048185 ignition[904]: Ignition finished successfully Feb 9 18:31:31.142284 systemd[1]: Reached target basic.target. Feb 9 18:31:31.067020 ignition[910]: Ignition 2.14.0 Feb 9 18:31:31.151681 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:31:31.067026 ignition[910]: Stage: disks Feb 9 18:31:31.067130 ignition[910]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:31.067149 ignition[910]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:31.069623 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:31.085621 ignition[910]: disks: disks passed Feb 9 18:31:31.085674 ignition[910]: Ignition finished successfully Feb 9 18:31:31.250639 systemd-fsck[918]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 18:31:31.267053 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:31:31.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.292322 systemd[1]: Mounting sysroot.mount... 
Feb 9 18:31:31.299383 kernel: audit: type=1130 audit(1707503491.275:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.313745 systemd[1]: Mounted sysroot.mount. Feb 9 18:31:31.320575 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:31:31.317524 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:31:31.355249 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:31:31.360069 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 18:31:31.367047 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:31:31.367082 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:31:31.372747 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:31:31.425870 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:31:31.430767 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:31:31.454905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (929) Feb 9 18:31:31.463145 initrd-setup-root[934]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:31:31.478320 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:31:31.478338 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:31:31.478347 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:31:31.483485 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:31:31.493739 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:31:31.502609 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:31:31.524148 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:31:31.966557 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 18:31:31.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.994908 kernel: audit: type=1130 audit(1707503491.971:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:31.972589 systemd[1]: Starting ignition-mount.service... Feb 9 18:31:31.996525 systemd[1]: Starting sysroot-boot.service... Feb 9 18:31:32.009667 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:31:32.009962 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:31:32.030430 ignition[996]: INFO : Ignition 2.14.0 Feb 9 18:31:32.034574 ignition[996]: INFO : Stage: mount Feb 9 18:31:32.039274 ignition[996]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:32.039274 ignition[996]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:32.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:32.049196 systemd[1]: Finished sysroot-boot.service. Feb 9 18:31:32.106261 kernel: audit: type=1130 audit(1707503492.054:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:32.106283 kernel: audit: type=1130 audit(1707503492.088:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:31:32.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:32.106324 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:32.106324 ignition[996]: INFO : mount: mount passed Feb 9 18:31:32.106324 ignition[996]: INFO : Ignition finished successfully Feb 9 18:31:32.056073 systemd[1]: Finished ignition-mount.service. Feb 9 18:31:32.707929 coreos-metadata[928]: Feb 09 18:31:32.707 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 18:31:32.718154 coreos-metadata[928]: Feb 09 18:31:32.718 INFO Fetch successful Feb 9 18:31:32.750599 coreos-metadata[928]: Feb 09 18:31:32.750 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 18:31:32.764723 coreos-metadata[928]: Feb 09 18:31:32.764 INFO Fetch successful Feb 9 18:31:32.771035 coreos-metadata[928]: Feb 09 18:31:32.770 INFO wrote hostname ci-3510.3.2-a-de7ead93d8 to /sysroot/etc/hostname Feb 9 18:31:32.772103 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 18:31:32.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:32.807011 systemd[1]: Starting ignition-files.service... Feb 9 18:31:32.815235 kernel: audit: type=1130 audit(1707503492.784:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:32.818370 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 18:31:32.836954 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1007) Feb 9 18:31:32.848860 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:31:32.848876 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:31:32.848886 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:31:32.858229 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:31:32.875067 ignition[1026]: INFO : Ignition 2.14.0 Feb 9 18:31:32.875067 ignition[1026]: INFO : Stage: files Feb 9 18:31:32.886102 ignition[1026]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:31:32.886102 ignition[1026]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:31:32.886102 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:31:32.886102 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:31:32.919644 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:31:32.919644 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:31:32.971566 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:31:32.979363 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:31:32.987663 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:31:32.987663 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:31:32.987663 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:31:32.986367 unknown[1026]: wrote ssh authorized keys file for user: core Feb 9 18:31:33.537888 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:31:33.700501 ignition[1026]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Feb 9 18:31:33.717114 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Feb 9 18:31:33.717114 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:31:33.717114 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:31:33.969780 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:31:34.221289 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:31:34.231991 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:31:34.231991 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Feb 9 18:31:34.686536 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:31:34.952883 ignition[1026]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Feb 9 18:31:34.969051 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Feb 9 18:31:34.969051 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:31:34.969051 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:31:35.341456 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:31:41.143139 ignition[1026]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6a5c9c02a29126949f096415bb1761a0c0ad44168e2ab3d0409982701da58f96223bec354828ddf958e945ef1ce63c0ad41e77cbcbcce0756163e71b4fbae432 Feb 9 18:31:41.159202 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:31:41.159202 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:31:41.159202 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:31:41.433645 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:31:48.705502 ignition[1026]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3 Feb 9 18:31:48.721959 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:31:48.721959 ignition[1026]: 
INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:31:48.721959 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 9 18:31:49.051962 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 18:32:03.905851 ignition[1026]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 9 18:32:03.922226 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:32:03.922226 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:32:03.922226 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:32:03.922226 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 18:32:03.922226 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 9 18:32:04.401324 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 18:32:04.475656 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:32:04.486220 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:32:04.639463 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1029)
Feb 9 18:32:04.639486 kernel: audit: type=1130 audit(1707503524.586:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116937145"
Feb 9 18:32:04.639541 ignition[1026]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116937145": device or resource busy
Feb 9 18:32:04.639541 ignition[1026]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1116937145", trying btrfs: device or resource busy
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116937145"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1116937145"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1116937145"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1116937145"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2355263133"
Feb 9 18:32:04.639541 ignition[1026]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2355263133": device or resource busy
Feb 9 18:32:04.639541 ignition[1026]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2355263133", trying btrfs: device or resource busy
Feb 9 18:32:04.639541 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2355263133"
Feb 9 18:32:04.948064 kernel: audit: type=1130 audit(1707503524.659:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.948097 kernel: audit: type=1130 audit(1707503524.707:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.948113 kernel: audit: type=1131 audit(1707503524.707:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.948123 kernel: audit: type=1130 audit(1707503524.801:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.948132 kernel: audit: type=1131 audit(1707503524.801:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.948141 kernel: audit: type=1130 audit(1707503524.923:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.536625 systemd[1]: mnt-oem1116937145.mount: Deactivated successfully.
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2355263133"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2355263133"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2355263133"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1a): [started] processing unit "prepare-critools.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1c): [started] processing unit "prepare-helm.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1e): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:32:04.956337 ignition[1026]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:32:05.228447 kernel: audit: type=1131 audit(1707503525.025:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.228475 kernel: audit: type=1131 audit(1707503525.202:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.556193 systemd[1]: mnt-oem2355263133.mount: Deactivated successfully.
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(1e): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 18:32:05.236060 ignition[1026]: INFO : files: files passed
Feb 9 18:32:05.236060 ignition[1026]: INFO : Ignition finished successfully
Feb 9 18:32:05.430471 kernel: audit: type=1131 audit(1707503525.251:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.430738 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 18:32:04.573320 systemd[1]: Finished ignition-files.service.
Feb 9 18:32:05.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.589837 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 18:32:04.628528 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 18:32:04.638403 systemd[1]: Starting ignition-quench.service...
Feb 9 18:32:04.654842 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 18:32:04.689900 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 18:32:04.689983 systemd[1]: Finished ignition-quench.service.
Feb 9 18:32:05.507539 ignition[1064]: INFO : Ignition 2.14.0
Feb 9 18:32:05.507539 ignition[1064]: INFO : Stage: umount
Feb 9 18:32:05.507539 ignition[1064]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 18:32:05.507539 ignition[1064]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 18:32:05.507539 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 18:32:05.507539 ignition[1064]: INFO : umount: umount passed
Feb 9 18:32:05.507539 ignition[1064]: INFO : Ignition finished successfully
Feb 9 18:32:05.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.707715 systemd[1]: Reached target ignition-complete.target.
Feb 9 18:32:04.766352 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 18:32:05.589000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 18:32:05.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.797359 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 18:32:05.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.797456 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 18:32:05.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.802287 systemd[1]: Reached target initrd-fs.target.
Feb 9 18:32:05.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.839425 systemd[1]: Reached target initrd.target.
Feb 9 18:32:04.853639 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 18:32:05.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.863652 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 18:32:04.918855 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 18:32:04.955732 systemd[1]: Starting initrd-cleanup.service...
Feb 9 18:32:04.978660 systemd[1]: Stopped target nss-lookup.target.
Feb 9 18:32:05.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.984989 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 18:32:05.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:04.999373 systemd[1]: Stopped target timers.target.
Feb 9 18:32:05.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.014634 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 18:32:05.014701 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 18:32:05.716560 kernel: hv_netvsc 0022487b-5d7f-0022-487b-5d7f0022487b eth0: Data path switched from VF: enP45037s1
Feb 9 18:32:05.048950 systemd[1]: Stopped target initrd.target.
Feb 9 18:32:05.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.059353 systemd[1]: Stopped target basic.target.
Feb 9 18:32:05.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.069596 systemd[1]: Stopped target ignition-complete.target.
Feb 9 18:32:05.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.081680 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 18:32:05.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.097657 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 18:32:05.113087 systemd[1]: Stopped target remote-fs.target.
Feb 9 18:32:05.124366 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 18:32:05.135722 systemd[1]: Stopped target sysinit.target.
Feb 9 18:32:05.150254 systemd[1]: Stopped target local-fs.target.
Feb 9 18:32:05.165082 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 18:32:05.176631 systemd[1]: Stopped target swap.target.
Feb 9 18:32:05.187459 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 18:32:05.187525 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 18:32:05.230959 systemd[1]: Stopped target cryptsetup.target.
Feb 9 18:32:05.240237 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 18:32:05.240289 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 18:32:05.251863 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 18:32:05.251912 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 18:32:05.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:05.280402 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 18:32:05.280439 systemd[1]: Stopped ignition-files.service.
Feb 9 18:32:05.291959 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 18:32:05.291996 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 18:32:05.304750 systemd[1]: Stopping ignition-mount.service...
Feb 9 18:32:05.325580 systemd[1]: Stopping sysroot-boot.service...
Feb 9 18:32:05.336865 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 18:32:05.336956 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 18:32:05.351815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 18:32:05.351860 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 18:32:05.365520 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 18:32:05.890291 iscsid[876]: iscsid shutting down.
Feb 9 18:32:05.365623 systemd[1]: Finished initrd-cleanup.service.
Feb 9 18:32:05.377762 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 18:32:05.377848 systemd[1]: Stopped ignition-mount.service.
Feb 9 18:32:05.390652 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 18:32:05.390697 systemd[1]: Stopped ignition-disks.service.
Feb 9 18:32:05.404061 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 18:32:05.404097 systemd[1]: Stopped ignition-kargs.service.
Feb 9 18:32:05.417723 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 18:32:05.417760 systemd[1]: Stopped ignition-fetch.service.
Feb 9 18:32:05.425909 systemd[1]: Stopped target network.target.
Feb 9 18:32:05.434513 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 18:32:05.434555 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 18:32:05.449862 systemd[1]: Stopped target paths.target.
Feb 9 18:32:05.456406 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 18:32:05.463915 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 18:32:05.477787 systemd[1]: Stopped target slices.target.
Feb 9 18:32:05.486067 systemd[1]: Stopped target sockets.target.
Feb 9 18:32:05.495079 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 18:32:05.495119 systemd[1]: Closed iscsid.socket.
Feb 9 18:32:05.503552 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 18:32:05.503579 systemd[1]: Closed iscsiuio.socket.
Feb 9 18:32:05.511194 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 18:32:05.890907 systemd-journald[276]: Received SIGTERM from PID 1 (n/a).
Feb 9 18:32:05.511234 systemd[1]: Stopped ignition-setup.service.
Feb 9 18:32:05.519307 systemd[1]: Stopping systemd-networkd.service...
Feb 9 18:32:05.526760 systemd[1]: Stopping systemd-resolved.service...
Feb 9 18:32:05.537938 systemd-networkd[867]: eth0: DHCPv6 lease lost
Feb 9 18:32:05.890000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 18:32:05.539383 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 18:32:05.540134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 18:32:05.540244 systemd[1]: Stopped systemd-resolved.service.
Feb 9 18:32:05.556853 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 18:32:05.557096 systemd[1]: Stopped systemd-networkd.service.
Feb 9 18:32:05.567964 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 18:32:05.568051 systemd[1]: Stopped sysroot-boot.service.
Feb 9 18:32:05.576073 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 18:32:05.576106 systemd[1]: Closed systemd-networkd.socket.
Feb 9 18:32:05.585375 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 18:32:05.585419 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 18:32:05.594825 systemd[1]: Stopping network-cleanup.service...
Feb 9 18:32:05.602497 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 18:32:05.602554 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 18:32:05.607485 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:32:05.607530 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:32:05.620596 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 18:32:05.620639 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 18:32:05.625625 systemd[1]: Stopping systemd-udevd.service...
Feb 9 18:32:05.634801 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 18:32:05.635311 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 18:32:05.635436 systemd[1]: Stopped systemd-udevd.service.
Feb 9 18:32:05.642180 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 18:32:05.642226 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 18:32:05.656814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 18:32:05.656852 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 18:32:05.665172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 18:32:05.665225 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 18:32:05.673378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 18:32:05.673417 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 18:32:05.681444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 18:32:05.681483 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 18:32:05.693578 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 18:32:05.712487 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 18:32:05.712562 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 18:32:05.725218 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 18:32:05.725265 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 18:32:05.729784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 18:32:05.729820 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 18:32:05.739025 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 18:32:05.739486 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 18:32:05.739579 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 18:32:05.806227 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 18:32:05.806333 systemd[1]: Stopped network-cleanup.service.
Feb 9 18:32:05.816494 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 18:32:05.826966 systemd[1]: Starting initrd-switch-root.service...
Feb 9 18:32:05.844466 systemd[1]: Switching root.
Feb 9 18:32:05.892209 systemd-journald[276]: Journal stopped
Feb 9 18:32:17.437100 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 18:32:17.437120 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 18:32:17.437130 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 18:32:17.437140 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 18:32:17.437148 kernel: SELinux: policy capability open_perms=1
Feb 9 18:32:17.437156 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 18:32:17.437165 kernel: SELinux: policy capability always_check_network=0
Feb 9 18:32:17.437173 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 18:32:17.437181 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 18:32:17.437188 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 18:32:17.437197 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 18:32:17.437207 systemd[1]: Successfully loaded SELinux policy in 326.535ms.
Feb 9 18:32:17.437217 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.642ms.
Feb 9 18:32:17.437227 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:32:17.437239 systemd[1]: Detected virtualization microsoft.
Feb 9 18:32:17.437248 systemd[1]: Detected architecture arm64.
Feb 9 18:32:17.437258 systemd[1]: Detected first boot.
Feb 9 18:32:17.437267 systemd[1]: Hostname set to .
Feb 9 18:32:17.437276 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:32:17.437285 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 18:32:17.437293 kernel: kauditd_printk_skb: 39 callbacks suppressed
Feb 9 18:32:17.437303 kernel: audit: type=1400 audit(1707503530.152:87): avc: denied { associate } for pid=1097 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 18:32:17.437314 kernel: audit: type=1300 audit(1707503530.152:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022814 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:17.437324 kernel: audit: type=1327 audit(1707503530.152:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:17.437333 kernel: audit: type=1400 audit(1707503530.166:88): avc: denied { associate } for pid=1097 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:32:17.437343 kernel: audit: type=1300 audit(1707503530.166:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228f9 a2=1ed a3=0 items=2 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:17.437352 kernel: audit: type=1307 audit(1707503530.166:88): cwd="/"
Feb 9 18:32:17.437362 kernel: audit: type=1302 audit(1707503530.166:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:17.437371 kernel: audit: type=1302 audit(1707503530.166:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:17.437380 kernel: audit: type=1327 audit(1707503530.166:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:17.437389 systemd[1]: Populated /etc with preset unit settings.
Feb 9 18:32:17.437398 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:32:17.437408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:32:17.437417 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:32:17.437428 kernel: audit: type=1334 audit(1707503536.722:89): prog-id=12 op=LOAD
Feb 9 18:32:17.437436 kernel: audit: type=1334 audit(1707503536.722:90): prog-id=3 op=UNLOAD
Feb 9 18:32:17.437445 kernel: audit: type=1334 audit(1707503536.729:91): prog-id=13 op=LOAD
Feb 9 18:32:17.437454 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 18:32:17.437463 kernel: audit: type=1334 audit(1707503536.735:92): prog-id=14 op=LOAD Feb 9 18:32:17.437472 kernel: audit: type=1334 audit(1707503536.735:93): prog-id=4 op=UNLOAD Feb 9 18:32:17.437482 kernel: audit: type=1334 audit(1707503536.735:94): prog-id=5 op=UNLOAD Feb 9 18:32:17.437492 kernel: audit: type=1131 audit(1707503536.736:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.437502 systemd[1]: Stopped iscsiuio.service. Feb 9 18:32:17.437511 kernel: audit: type=1334 audit(1707503536.790:96): prog-id=12 op=UNLOAD Feb 9 18:32:17.437520 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:32:17.437530 kernel: audit: type=1131 audit(1707503536.796:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.437539 systemd[1]: Stopped iscsid.service. Feb 9 18:32:17.437548 kernel: audit: type=1131 audit(1707503536.826:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.437559 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:32:17.437568 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:32:17.437577 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:32:17.437587 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:32:17.437596 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:32:17.437605 systemd[1]: Created slice system-getty.slice. Feb 9 18:32:17.437614 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:32:17.437624 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Feb 9 18:32:17.437633 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:32:17.437644 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:32:17.437654 systemd[1]: Created slice user.slice. Feb 9 18:32:17.437663 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:32:17.437672 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:32:17.437681 systemd[1]: Set up automount boot.automount. Feb 9 18:32:17.437691 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:32:17.437700 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:32:17.437709 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:32:17.437719 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:32:17.437729 systemd[1]: Reached target integritysetup.target. Feb 9 18:32:17.437738 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:32:17.437747 systemd[1]: Reached target remote-fs.target. Feb 9 18:32:17.437756 systemd[1]: Reached target slices.target. Feb 9 18:32:17.437765 systemd[1]: Reached target swap.target. Feb 9 18:32:17.437774 systemd[1]: Reached target torcx.target. Feb 9 18:32:17.437785 systemd[1]: Reached target veritysetup.target. Feb 9 18:32:17.437795 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:32:17.437804 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:32:17.437814 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:32:17.437823 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:32:17.437833 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:32:17.437843 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:32:17.437853 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:32:17.437863 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:32:17.437872 systemd[1]: Mounting media.mount... Feb 9 18:32:17.437881 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:32:17.437890 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 9 18:32:17.437906 systemd[1]: Mounting tmp.mount... Feb 9 18:32:17.437916 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:32:17.437925 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:32:17.437936 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:32:17.437945 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:32:17.437954 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:32:17.437964 systemd[1]: Starting modprobe@drm.service... Feb 9 18:32:17.437973 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:32:17.437982 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:32:17.437991 systemd[1]: Starting modprobe@loop.service... Feb 9 18:32:17.438001 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:32:17.438011 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:32:17.438022 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:32:17.438031 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:32:17.438041 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:32:17.438050 systemd[1]: Stopped systemd-journald.service. Feb 9 18:32:17.438061 systemd[1]: systemd-journald.service: Consumed 3.159s CPU time. Feb 9 18:32:17.438070 kernel: fuse: init (API version 7.34) Feb 9 18:32:17.438079 systemd[1]: Starting systemd-journald.service... Feb 9 18:32:17.438088 kernel: loop: module loaded Feb 9 18:32:17.438097 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:32:17.438107 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:32:17.438116 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:32:17.438126 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:32:17.438135 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:32:17.438144 systemd[1]: Stopped verity-setup.service. 
Feb 9 18:32:17.438154 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:32:17.438166 systemd-journald[1203]: Journal started Feb 9 18:32:17.438203 systemd-journald[1203]: Runtime Journal (/run/log/journal/dcb092c5bdbc434e9435fc530dcdc672) is 8.0M, max 78.6M, 70.6M free. Feb 9 18:32:08.139000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:32:08.807000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:32:08.807000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:32:08.807000 audit: BPF prog-id=10 op=LOAD Feb 9 18:32:08.807000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:32:08.807000 audit: BPF prog-id=11 op=LOAD Feb 9 18:32:08.807000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:32:10.152000 audit[1097]: AVC avc: denied { associate } for pid=1097 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:32:10.152000 audit[1097]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=4000022814 a1=4000028ac8 a2=4000026d00 a3=32 items=0 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:10.152000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 
18:32:10.166000 audit[1097]: AVC avc: denied { associate } for pid=1097 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:32:10.166000 audit[1097]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40000228f9 a2=1ed a3=0 items=2 ppid=1080 pid=1097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:10.166000 audit: CWD cwd="/" Feb 9 18:32:10.166000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:10.166000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:10.166000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:32:16.722000 audit: BPF prog-id=12 op=LOAD Feb 9 18:32:16.722000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:32:16.729000 audit: BPF prog-id=13 op=LOAD Feb 9 18:32:16.735000 audit: BPF prog-id=14 op=LOAD Feb 9 18:32:16.735000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:32:16.735000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:32:16.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:16.790000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:32:16.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:16.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:16.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:16.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:17.349000 audit: BPF prog-id=15 op=LOAD Feb 9 18:32:17.349000 audit: BPF prog-id=16 op=LOAD Feb 9 18:32:17.349000 audit: BPF prog-id=17 op=LOAD Feb 9 18:32:17.349000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:32:17.349000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:32:17.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.434000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:32:17.434000 audit[1203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe90f2730 a2=4000 a3=1 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:17.434000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:32:16.721385 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:32:10.097140 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:32:16.737154 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:32:10.123432 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:32:16.737513 systemd[1]: systemd-journald.service: Consumed 3.159s CPU time. 
Feb 9 18:32:10.123453 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:32:10.123490 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:32:10.123499 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:32:10.123536 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:32:10.123548 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:32:10.123747 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:32:10.123780 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:32:10.123792 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:32:10.137376 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:32:10.137410 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:32:10.137435 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:32:10.137449 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:32:10.137467 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:32:10.137481 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:32:15.638984 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:32:15.639248 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:32:15.639356 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 
18:32:15.639513 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:32:15.639561 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:32:15.639618 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-02-09T18:32:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:32:17.454082 systemd[1]: Started systemd-journald.service. Feb 9 18:32:17.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.454827 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:32:17.458961 systemd[1]: Mounted media.mount. Feb 9 18:32:17.462661 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:32:17.467204 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:32:17.471810 systemd[1]: Mounted tmp.mount. Feb 9 18:32:17.475687 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:32:17.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.480366 systemd[1]: Finished kmod-static-nodes.service. 
Feb 9 18:32:17.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.485463 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:32:17.485591 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:32:17.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.490514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:32:17.490649 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:32:17.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.495359 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:32:17.495477 systemd[1]: Finished modprobe@drm.service. Feb 9 18:32:17.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:17.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.500109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:32:17.500225 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:32:17.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.505159 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:32:17.505269 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:32:17.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.509920 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:32:17.510031 systemd[1]: Finished modprobe@loop.service. Feb 9 18:32:17.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:17.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.514661 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:32:17.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.519606 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:32:17.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.525072 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:32:17.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.530452 systemd[1]: Reached target network-pre.target. Feb 9 18:32:17.536007 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:32:17.541307 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:32:17.545464 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:32:17.551943 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:32:17.557056 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:32:17.561825 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:32:17.562827 systemd[1]: Starting systemd-random-seed.service... 
Feb 9 18:32:17.567626 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:32:17.568612 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:32:17.574167 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:32:17.580162 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:32:17.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.585019 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:32:17.590048 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:32:17.595943 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:32:17.606055 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:32:17.619065 systemd-journald[1203]: Time spent on flushing to /var/log/journal/dcb092c5bdbc434e9435fc530dcdc672 is 14.376ms for 1128 entries. Feb 9 18:32:17.619065 systemd-journald[1203]: System Journal (/var/log/journal/dcb092c5bdbc434e9435fc530dcdc672) is 8.0M, max 2.6G, 2.6G free. Feb 9 18:32:17.685664 systemd-journald[1203]: Received client request to flush runtime journal. Feb 9 18:32:17.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:17.627140 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:32:17.632112 systemd[1]: Reached target first-boot-complete.target. 
Feb 9 18:32:17.639702 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:32:17.686599 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:32:17.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:18.146668 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:32:18.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:18.152628 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:32:18.407286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:32:18.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:18.706942 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:32:18.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:18.711000 audit: BPF prog-id=18 op=LOAD Feb 9 18:32:18.711000 audit: BPF prog-id=19 op=LOAD Feb 9 18:32:18.711000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:32:18.711000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:32:18.713251 systemd[1]: Starting systemd-udevd.service... Feb 9 18:32:18.731442 systemd-udevd[1222]: Using default interface naming scheme 'v252'. Feb 9 18:32:18.878246 systemd[1]: Started systemd-udevd.service. 
Feb 9 18:32:18.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:18.886000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:32:18.889609 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:32:18.921878 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 18:32:18.946000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:32:18.946000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:32:18.946000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:32:18.948047 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:32:18.991932 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 18:32:18.991000 audit[1231]: AVC avc: denied { confidentiality } for pid=1231 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:32:19.010922 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 18:32:19.011004 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 18:32:19.020172 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 9 18:32:19.027045 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 18:32:19.024633 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:32:19.035785 kernel: hv_vmbus: registering driver hv_utils
Feb 9 18:32:19.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:19.051350 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 18:32:19.051426 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 18:32:19.051442 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 18:32:19.056096 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 18:32:19.063138 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 18:32:19.063155 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 18:32:19.147551 kernel: Console: switching to colour dummy device 80x25
Feb 9 18:32:19.155187 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:32:18.991000 audit[1231]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaacac54f40 a1=aa2c a2=ffff8ab824b0 a3=aaaacabb3010 items=12 ppid=1222 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:18.991000 audit: CWD cwd="/"
Feb 9 18:32:18.991000 audit: PATH item=0 name=(null) inode=4052 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=1 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=2 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=3 name=(null) inode=10886 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=4 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=5 name=(null) inode=10887 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=6 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=7 name=(null) inode=10888 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=8 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=9 name=(null) inode=10889 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=10 name=(null) inode=10885 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PATH item=11 name=(null) inode=10890 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.991000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 18:32:19.323627 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1240)
Feb 9 18:32:19.339581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:32:19.347869 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 18:32:19.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:19.353805 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 18:32:19.359767 systemd-networkd[1243]: lo: Link UP
Feb 9 18:32:19.360002 systemd-networkd[1243]: lo: Gained carrier
Feb 9 18:32:19.360486 systemd-networkd[1243]: Enumeration completed
Feb 9 18:32:19.360636 systemd[1]: Started systemd-networkd.service.
Feb 9 18:32:19.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:19.366256 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 18:32:19.387717 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:32:19.436447 kernel: mlx5_core afed:00:02.0 enP45037s1: Link up
Feb 9 18:32:19.462402 systemd-networkd[1243]: enP45037s1: Link UP
Feb 9 18:32:19.462540 kernel: hv_netvsc 0022487b-5d7f-0022-487b-5d7f0022487b eth0: Data path switched to VF: enP45037s1
Feb 9 18:32:19.462845 systemd-networkd[1243]: eth0: Link UP
Feb 9 18:32:19.462907 systemd-networkd[1243]: eth0: Gained carrier
Feb 9 18:32:19.467644 systemd-networkd[1243]: enP45037s1: Gained carrier
Feb 9 18:32:19.477561 systemd-networkd[1243]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 18:32:19.619385 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:32:19.657324 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 18:32:19.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:19.662409 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:32:19.667910 systemd[1]: Starting lvm2-activation.service...
Feb 9 18:32:19.671802 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:32:19.694266 systemd[1]: Finished lvm2-activation.service.
Feb 9 18:32:19.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:19.698890 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:32:19.703345 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 18:32:19.703371 systemd[1]: Reached target local-fs.target.
Feb 9 18:32:19.707556 systemd[1]: Reached target machines.target.
Feb 9 18:32:19.712978 systemd[1]: Starting ldconfig.service...
Feb 9 18:32:19.716643 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 18:32:19.716709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:19.717772 systemd[1]: Starting systemd-boot-update.service...
Feb 9 18:32:19.722776 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 18:32:19.729300 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 18:32:19.733918 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:32:19.733972 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:32:19.734965 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 18:32:19.773630 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 18:32:19.774817 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1304 (bootctl)
Feb 9 18:32:19.775947 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 18:32:19.931352 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 18:32:19.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:20.182589 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 18:32:20.186234 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 18:32:20.444518 systemd-fsck[1312]: fsck.fat 4.2 (2021-01-31)
Feb 9 18:32:20.444518 systemd-fsck[1312]: /dev/sda1: 236 files, 113719/258078 clusters
Feb 9 18:32:20.446758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 18:32:20.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:20.454333 systemd[1]: Mounting boot.mount...
Feb 9 18:32:20.517997 systemd[1]: Mounted boot.mount.
Feb 9 18:32:20.528845 systemd[1]: Finished systemd-boot-update.service.
Feb 9 18:32:20.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:20.617853 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 18:32:20.618469 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 18:32:20.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:20.885572 systemd-networkd[1243]: eth0: Gained IPv6LL
Feb 9 18:32:20.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:20.891739 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 18:32:20.995042 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 18:32:20.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.001242 systemd[1]: Starting audit-rules.service...
Feb 9 18:32:21.006189 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 18:32:21.011682 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 18:32:21.017000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:32:21.019219 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:32:21.024000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:32:21.025320 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 18:32:21.030332 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 18:32:21.062000 audit[1324]: SYSTEM_BOOT pid=1324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.065118 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 18:32:21.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.075473 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 18:32:21.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.080663 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 18:32:21.138998 systemd[1]: Started systemd-timesyncd.service.
Feb 9 18:32:21.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.143843 systemd[1]: Reached target time-set.target.
Feb 9 18:32:21.170010 systemd-resolved[1321]: Positive Trust Anchors:
Feb 9 18:32:21.170024 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:32:21.170049 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:32:21.210455 systemd-resolved[1321]: Using system hostname 'ci-3510.3.2-a-de7ead93d8'.
Feb 9 18:32:21.211873 systemd[1]: Started systemd-resolved.service.
Feb 9 18:32:21.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.216580 systemd[1]: Reached target network.target.
Feb 9 18:32:21.221083 systemd[1]: Reached target network-online.target.
Feb 9 18:32:21.226018 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:32:21.250625 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 18:32:21.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:21.388499 systemd-timesyncd[1323]: Contacted time server 5.78.89.3:123 (0.flatcar.pool.ntp.org).
Feb 9 18:32:21.388871 systemd-timesyncd[1323]: Initial clock synchronization to Fri 2024-02-09 18:32:21.389439 UTC.
Feb 9 18:32:21.427000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:32:21.427000 audit[1339]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc67a520 a2=420 a3=0 items=0 ppid=1318 pid=1339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:21.427000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 18:32:21.428290 augenrules[1339]: No rules
Feb 9 18:32:21.429022 systemd[1]: Finished audit-rules.service.
Feb 9 18:32:27.632806 ldconfig[1303]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 18:32:27.645540 systemd[1]: Finished ldconfig.service.
Feb 9 18:32:27.651317 systemd[1]: Starting systemd-update-done.service...
Feb 9 18:32:27.684551 systemd[1]: Finished systemd-update-done.service.
Feb 9 18:32:27.689399 systemd[1]: Reached target sysinit.target.
Feb 9 18:32:27.693827 systemd[1]: Started motdgen.path.
Feb 9 18:32:27.697498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 18:32:27.703854 systemd[1]: Started logrotate.timer.
Feb 9 18:32:27.707749 systemd[1]: Started mdadm.timer.
Feb 9 18:32:27.711255 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 18:32:27.715976 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 18:32:27.716005 systemd[1]: Reached target paths.target.
Feb 9 18:32:27.719959 systemd[1]: Reached target timers.target.
Feb 9 18:32:27.724525 systemd[1]: Listening on dbus.socket.
Feb 9 18:32:27.729536 systemd[1]: Starting docker.socket...
Feb 9 18:32:27.768749 systemd[1]: Listening on sshd.socket.
Feb 9 18:32:27.772853 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:27.773306 systemd[1]: Listening on docker.socket.
Feb 9 18:32:27.777406 systemd[1]: Reached target sockets.target.
Feb 9 18:32:27.781639 systemd[1]: Reached target basic.target.
Feb 9 18:32:27.785748 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:32:27.785774 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:32:27.786855 systemd[1]: Starting containerd.service...
Feb 9 18:32:27.791395 systemd[1]: Starting dbus.service...
Feb 9 18:32:27.795600 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 18:32:27.800970 systemd[1]: Starting extend-filesystems.service...
Feb 9 18:32:27.807909 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 18:32:27.808954 systemd[1]: Starting motdgen.service...
Feb 9 18:32:27.813272 systemd[1]: Started nvidia.service.
Feb 9 18:32:27.818200 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 18:32:27.823259 systemd[1]: Starting prepare-critools.service...
Feb 9 18:32:27.828097 systemd[1]: Starting prepare-helm.service...
Feb 9 18:32:27.833055 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 18:32:27.838211 systemd[1]: Starting sshd-keygen.service...
Feb 9 18:32:27.843739 systemd[1]: Starting systemd-logind.service...
Feb 9 18:32:27.847658 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:27.847719 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 18:32:27.848078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 18:32:27.848665 systemd[1]: Starting update-engine.service...
Feb 9 18:32:27.853566 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 18:32:27.863928 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 18:32:27.864104 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 18:32:27.873315 jq[1369]: true
Feb 9 18:32:27.873834 jq[1349]: false
Feb 9 18:32:27.904732 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 18:32:27.904885 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 18:32:27.913189 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 18:32:27.913615 systemd[1]: Finished motdgen.service.
Feb 9 18:32:27.919416 extend-filesystems[1350]: Found sda
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda1
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda2
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda3
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found usr
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda4
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda6
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda7
Feb 9 18:32:27.923238 extend-filesystems[1350]: Found sda9
Feb 9 18:32:27.923238 extend-filesystems[1350]: Checking size of /dev/sda9
Feb 9 18:32:27.975359 jq[1377]: true
Feb 9 18:32:27.950314 systemd-logind[1364]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 9 18:32:27.959904 systemd-logind[1364]: New seat seat0.
Feb 9 18:32:27.998295 env[1379]: time="2024-02-09T18:32:27.998233499Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 18:32:28.027311 env[1379]: time="2024-02-09T18:32:28.025396563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 18:32:28.027311 env[1379]: time="2024-02-09T18:32:28.025570011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.029958 extend-filesystems[1350]: Old size kept for /dev/sda9
Feb 9 18:32:28.041662 extend-filesystems[1350]: Found sr0
Feb 9 18:32:28.036003 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.048595000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.048630721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.049179626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.049206947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.049221148Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.049231068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.049313952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.050218912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.050365359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:28.050610 env[1379]: time="2024-02-09T18:32:28.050382520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 18:32:28.036171 systemd[1]: Finished extend-filesystems.service.
Feb 9 18:32:28.051084 env[1379]: time="2024-02-09T18:32:28.050932184Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 18:32:28.051084 env[1379]: time="2024-02-09T18:32:28.050953945Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 18:32:28.068460 env[1379]: time="2024-02-09T18:32:28.068345803Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 18:32:28.068460 env[1379]: time="2024-02-09T18:32:28.068391445Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 18:32:28.068460 env[1379]: time="2024-02-09T18:32:28.068405125Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.068680298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.068709579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.068734780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.068748661Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069097556Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069117677Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069131918Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069144678Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069158239Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069291885Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069363928Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069611259Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069638501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070466 env[1379]: time="2024-02-09T18:32:28.069652501Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069699583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069715864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069728545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069739545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069752666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069765706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069776787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069788347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069803228Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069918793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069934354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069946274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069957315Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 18:32:28.070792 env[1379]: time="2024-02-09T18:32:28.069972876Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 18:32:28.071052 env[1379]: time="2024-02-09T18:32:28.069984036Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 18:32:28.071052 env[1379]: time="2024-02-09T18:32:28.070000157Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 18:32:28.071052 env[1379]: time="2024-02-09T18:32:28.070033518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 18:32:28.071111 env[1379]: time="2024-02-09T18:32:28.070226367Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 18:32:28.071111 env[1379]: time="2024-02-09T18:32:28.070277369Z" level=info msg="Connect containerd service"
Feb 9 18:32:28.071111 env[1379]: time="2024-02-09T18:32:28.070307210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.071815918Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.071998126Z" level=info msg="Start subscribing containerd event"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.072055729Z" level=info msg="Start recovering state"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.072144893Z" level=info msg="Start event monitor"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.072165254Z" level=info msg="Start snapshots syncer"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.072176454Z" level=info msg="Start cni network conf syncer for default"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.072184654Z" level=info msg="Start streaming server"
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.073258142Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.073327545Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 18:32:28.087492 env[1379]: time="2024-02-09T18:32:28.082613601Z" level=info msg="containerd successfully booted in 0.085253s"
Feb 9 18:32:28.073475 systemd[1]: Started containerd.service.
Feb 9 18:32:28.093837 tar[1371]: ./
Feb 9 18:32:28.093837 tar[1371]: ./loopback
Feb 9 18:32:28.095360 tar[1373]: linux-arm64/helm
Feb 9 18:32:28.095762 tar[1372]: crictl
Feb 9 18:32:28.105849 bash[1414]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 18:32:28.106703 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 18:32:28.178026 dbus-daemon[1348]: [system] SELinux support is enabled
Feb 9 18:32:28.178187 systemd[1]: Started dbus.service.
Feb 9 18:32:28.183669 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 18:32:28.183690 systemd[1]: Reached target system-config.target.
Feb 9 18:32:28.192826 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 18:32:28.192847 systemd[1]: Reached target user-config.target.
Feb 9 18:32:28.198399 systemd[1]: Started systemd-logind.service.
Feb 9 18:32:28.203121 tar[1371]: ./bandwidth Feb 9 18:32:28.205732 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 18:32:28.206295 dbus-daemon[1348]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:32:28.303672 tar[1371]: ./ptp Feb 9 18:32:28.406003 tar[1371]: ./vlan Feb 9 18:32:28.509623 tar[1371]: ./host-device Feb 9 18:32:28.574365 tar[1373]: linux-arm64/LICENSE Feb 9 18:32:28.574622 tar[1373]: linux-arm64/README.md Feb 9 18:32:28.580287 systemd[1]: Finished prepare-helm.service. Feb 9 18:32:28.585117 update_engine[1366]: I0209 18:32:28.565986 1366 main.cc:92] Flatcar Update Engine starting Feb 9 18:32:28.600343 tar[1371]: ./tuning Feb 9 18:32:28.631071 systemd[1]: Started update-engine.service. Feb 9 18:32:28.637532 update_engine[1366]: I0209 18:32:28.631093 1366 update_check_scheduler.cc:74] Next update check in 2m42s Feb 9 18:32:28.639690 systemd[1]: Started locksmithd.service. Feb 9 18:32:28.663886 tar[1371]: ./vrf Feb 9 18:32:28.722574 tar[1371]: ./sbr Feb 9 18:32:28.777124 tar[1371]: ./tap Feb 9 18:32:28.816276 tar[1371]: ./dhcp Feb 9 18:32:28.930197 tar[1371]: ./static Feb 9 18:32:28.958344 systemd[1]: Finished prepare-critools.service. Feb 9 18:32:28.974728 tar[1371]: ./firewall Feb 9 18:32:29.011650 tar[1371]: ./macvlan Feb 9 18:32:29.045169 tar[1371]: ./dummy Feb 9 18:32:29.077988 tar[1371]: ./bridge Feb 9 18:32:29.113735 tar[1371]: ./ipvlan Feb 9 18:32:29.146313 tar[1371]: ./portmap Feb 9 18:32:29.177544 tar[1371]: ./host-local Feb 9 18:32:29.267664 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:32:30.283331 sshd_keygen[1368]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:32:30.299571 systemd[1]: Finished sshd-keygen.service. Feb 9 18:32:30.305346 systemd[1]: Starting issuegen.service... Feb 9 18:32:30.310220 systemd[1]: Started waagent.service. Feb 9 18:32:30.314788 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:32:30.314938 systemd[1]: Finished issuegen.service. 
Feb 9 18:32:30.320234 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:32:30.375709 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:32:30.382211 systemd[1]: Started getty@tty1.service. Feb 9 18:32:30.387793 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:32:30.392976 systemd[1]: Reached target getty.target. Feb 9 18:32:30.397621 systemd[1]: Reached target multi-user.target. Feb 9 18:32:30.403745 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:32:30.411371 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:32:30.411543 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:32:30.417498 systemd[1]: Startup finished in 717ms (kernel) + 43.983s (initrd) + 22.719s (userspace) = 1min 7.420s. Feb 9 18:32:30.439905 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:32:31.118741 login[1478]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 18:32:31.120144 login[1479]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:31.144528 systemd[1]: Created slice user-500.slice. Feb 9 18:32:31.145617 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:32:31.147783 systemd-logind[1364]: New session 1 of user core. Feb 9 18:32:31.179654 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:32:31.181082 systemd[1]: Starting user@500.service... Feb 9 18:32:31.197103 (systemd)[1482]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:32:31.412053 systemd[1482]: Queued start job for default target default.target. Feb 9 18:32:31.412576 systemd[1482]: Reached target paths.target. Feb 9 18:32:31.412596 systemd[1482]: Reached target sockets.target. Feb 9 18:32:31.412607 systemd[1482]: Reached target timers.target. Feb 9 18:32:31.412617 systemd[1482]: Reached target basic.target. 
Feb 9 18:32:31.412717 systemd[1]: Started user@500.service. Feb 9 18:32:31.413564 systemd[1]: Started session-1.scope. Feb 9 18:32:31.414006 systemd[1482]: Reached target default.target. Feb 9 18:32:31.414167 systemd[1482]: Startup finished in 211ms. Feb 9 18:32:32.119439 login[1478]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:32.123750 systemd[1]: Started session-2.scope. Feb 9 18:32:32.124207 systemd-logind[1364]: New session 2 of user core. Feb 9 18:32:38.166658 waagent[1476]: 2024-02-09T18:32:38.166544Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 18:32:38.173512 waagent[1476]: 2024-02-09T18:32:38.173433Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 18:32:38.178628 waagent[1476]: 2024-02-09T18:32:38.178571Z INFO Daemon Daemon Python: 3.9.16 Feb 9 18:32:38.183367 waagent[1476]: 2024-02-09T18:32:38.183195Z INFO Daemon Daemon Run daemon Feb 9 18:32:38.187960 waagent[1476]: 2024-02-09T18:32:38.187904Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 18:32:38.204580 waagent[1476]: 2024-02-09T18:32:38.204458Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 18:32:38.219447 waagent[1476]: 2024-02-09T18:32:38.219303Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:38.229822 waagent[1476]: 2024-02-09T18:32:38.229755Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:38.235533 waagent[1476]: 2024-02-09T18:32:38.235466Z INFO Daemon Daemon Using waagent for provisioning Feb 9 18:32:38.241534 waagent[1476]: 2024-02-09T18:32:38.241472Z INFO Daemon Daemon Activate resource disk Feb 9 18:32:38.246351 waagent[1476]: 2024-02-09T18:32:38.246293Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 18:32:38.260566 waagent[1476]: 2024-02-09T18:32:38.260506Z INFO Daemon Daemon Found device: None Feb 9 18:32:38.267890 waagent[1476]: 2024-02-09T18:32:38.267819Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 18:32:38.276732 waagent[1476]: 2024-02-09T18:32:38.276674Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 18:32:38.288809 waagent[1476]: 2024-02-09T18:32:38.288747Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:38.294961 waagent[1476]: 2024-02-09T18:32:38.294904Z INFO Daemon Daemon Running default provisioning handler Feb 9 18:32:38.308296 waagent[1476]: 2024-02-09T18:32:38.308173Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 18:32:38.323783 waagent[1476]: 2024-02-09T18:32:38.323659Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:38.335609 waagent[1476]: 2024-02-09T18:32:38.335545Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:38.340918 waagent[1476]: 2024-02-09T18:32:38.340859Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 18:32:38.422698 waagent[1476]: 2024-02-09T18:32:38.422509Z INFO Daemon Daemon Successfully mounted dvd Feb 9 18:32:38.499734 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 18:32:38.557075 waagent[1476]: 2024-02-09T18:32:38.556938Z INFO Daemon Daemon Detect protocol endpoint Feb 9 18:32:38.562544 waagent[1476]: 2024-02-09T18:32:38.562470Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:38.570065 waagent[1476]: 2024-02-09T18:32:38.570000Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 9 18:32:38.577548 waagent[1476]: 2024-02-09T18:32:38.577483Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 18:32:38.583165 waagent[1476]: 2024-02-09T18:32:38.583106Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 18:32:38.588547 waagent[1476]: 2024-02-09T18:32:38.588488Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 18:32:38.675198 waagent[1476]: 2024-02-09T18:32:38.675063Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 18:32:38.684368 waagent[1476]: 2024-02-09T18:32:38.684320Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 18:32:38.690017 waagent[1476]: 2024-02-09T18:32:38.689955Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 18:32:39.356491 waagent[1476]: 2024-02-09T18:32:39.356315Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 18:32:39.372338 waagent[1476]: 2024-02-09T18:32:39.372264Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 9 18:32:39.378720 waagent[1476]: 2024-02-09T18:32:39.378659Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 18:32:39.452032 waagent[1476]: 2024-02-09T18:32:39.451891Z INFO Daemon Daemon Found private key matching thumbprint 70F748330842F76F286700F4F9EB7AF953E3D2C6 Feb 9 18:32:39.460848 waagent[1476]: 2024-02-09T18:32:39.460764Z INFO Daemon Daemon Certificate with thumbprint 997D6ECEA3687ADA4C7BBC0CA251CD5E4AC265EB has no matching private key. Feb 9 18:32:39.470876 waagent[1476]: 2024-02-09T18:32:39.470799Z INFO Daemon Daemon Fetch goal state completed Feb 9 18:32:39.518601 waagent[1476]: 2024-02-09T18:32:39.518542Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 4ada723e-dd63-4577-b591-ba264b7ab7fc New eTag: 9116760014256056716] Feb 9 18:32:39.530491 waagent[1476]: 2024-02-09T18:32:39.530396Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:32:39.546434 waagent[1476]: 2024-02-09T18:32:39.546367Z INFO Daemon Daemon Starting provisioning Feb 9 18:32:39.551798 waagent[1476]: 2024-02-09T18:32:39.551738Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 18:32:39.558993 waagent[1476]: 2024-02-09T18:32:39.558927Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-de7ead93d8] Feb 9 18:32:39.596462 waagent[1476]: 2024-02-09T18:32:39.596313Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-de7ead93d8] Feb 9 18:32:39.603424 waagent[1476]: 2024-02-09T18:32:39.603337Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 18:32:39.610116 waagent[1476]: 2024-02-09T18:32:39.610023Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 18:32:39.626237 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 18:32:39.626396 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 18:32:39.626473 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 18:32:39.626712 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 18:32:39.633498 systemd-networkd[1243]: eth0: DHCPv6 lease lost Feb 9 18:32:39.634734 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:32:39.634906 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:32:39.636880 systemd[1]: Starting systemd-networkd.service... Feb 9 18:32:39.663813 systemd-networkd[1526]: enP45037s1: Link UP Feb 9 18:32:39.663823 systemd-networkd[1526]: enP45037s1: Gained carrier Feb 9 18:32:39.664704 systemd-networkd[1526]: eth0: Link UP Feb 9 18:32:39.664714 systemd-networkd[1526]: eth0: Gained carrier Feb 9 18:32:39.665025 systemd-networkd[1526]: lo: Link UP Feb 9 18:32:39.665034 systemd-networkd[1526]: lo: Gained carrier Feb 9 18:32:39.665259 systemd-networkd[1526]: eth0: Gained IPv6LL Feb 9 18:32:39.665544 systemd-networkd[1526]: Enumeration completed Feb 9 18:32:39.672637 waagent[1476]: 2024-02-09T18:32:39.666603Z INFO Daemon Daemon Create user account if not exists Feb 9 18:32:39.665640 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:39.667976 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:32:39.672948 waagent[1476]: 2024-02-09T18:32:39.672772Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 18:32:39.678685 systemd-networkd[1526]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:39.679460 waagent[1476]: 2024-02-09T18:32:39.679339Z INFO Daemon Daemon Configure sudoer Feb 9 18:32:39.684668 waagent[1476]: 2024-02-09T18:32:39.684594Z INFO Daemon Daemon Configure sshd Feb 9 18:32:39.688937 waagent[1476]: 2024-02-09T18:32:39.688874Z INFO Daemon Daemon Deploy ssh public key. Feb 9 18:32:39.701532 systemd-networkd[1526]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:39.704100 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 18:32:40.898808 waagent[1476]: 2024-02-09T18:32:40.898740Z INFO Daemon Daemon Provisioning complete Feb 9 18:32:40.926906 waagent[1476]: 2024-02-09T18:32:40.926830Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 18:32:40.933452 waagent[1476]: 2024-02-09T18:32:40.933370Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 18:32:40.944307 waagent[1476]: 2024-02-09T18:32:40.944236Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 18:32:41.241772 waagent[1535]: 2024-02-09T18:32:41.241631Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 18:32:41.242825 waagent[1535]: 2024-02-09T18:32:41.242772Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:41.243057 waagent[1535]: 2024-02-09T18:32:41.243009Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:41.255389 waagent[1535]: 2024-02-09T18:32:41.255325Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 18:32:41.255682 waagent[1535]: 2024-02-09T18:32:41.255632Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 18:32:41.321917 waagent[1535]: 2024-02-09T18:32:41.321733Z INFO ExtHandler ExtHandler Found private key matching thumbprint 70F748330842F76F286700F4F9EB7AF953E3D2C6 Feb 9 18:32:41.322354 waagent[1535]: 2024-02-09T18:32:41.322301Z INFO ExtHandler ExtHandler Certificate with thumbprint 997D6ECEA3687ADA4C7BBC0CA251CD5E4AC265EB has no matching private key. 
Feb 9 18:32:41.322777 waagent[1535]: 2024-02-09T18:32:41.322723Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 18:32:41.340549 waagent[1535]: 2024-02-09T18:32:41.340495Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 0719a599-35a0-40d5-be92-11710c9387aa New eTag: 9116760014256056716] Feb 9 18:32:41.341271 waagent[1535]: 2024-02-09T18:32:41.341215Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:32:41.424695 waagent[1535]: 2024-02-09T18:32:41.424558Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:32:41.435103 waagent[1535]: 2024-02-09T18:32:41.435037Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1535 Feb 9 18:32:41.438911 waagent[1535]: 2024-02-09T18:32:41.438853Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:32:41.440358 waagent[1535]: 2024-02-09T18:32:41.440301Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:32:41.555145 waagent[1535]: 2024-02-09T18:32:41.555025Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:32:41.555860 waagent[1535]: 2024-02-09T18:32:41.555804Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:32:41.563407 waagent[1535]: 2024-02-09T18:32:41.563331Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 18:32:41.563932 waagent[1535]: 2024-02-09T18:32:41.563873Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:32:41.565102 waagent[1535]: 2024-02-09T18:32:41.565034Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 18:32:41.566411 waagent[1535]: 2024-02-09T18:32:41.566339Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:32:41.567058 waagent[1535]: 2024-02-09T18:32:41.566998Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:41.567317 waagent[1535]: 2024-02-09T18:32:41.567267Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:41.567996 waagent[1535]: 2024-02-09T18:32:41.567937Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 18:32:41.568380 waagent[1535]: 2024-02-09T18:32:41.568326Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:32:41.568380 waagent[1535]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:32:41.568380 waagent[1535]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:32:41.568380 waagent[1535]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:32:41.568380 waagent[1535]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:41.568380 waagent[1535]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:41.568380 waagent[1535]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:41.570928 waagent[1535]: 2024-02-09T18:32:41.570788Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:41.571209 waagent[1535]: 2024-02-09T18:32:41.571150Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:41.571799 waagent[1535]: 2024-02-09T18:32:41.571736Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:32:41.571944 waagent[1535]: 2024-02-09T18:32:41.571898Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:32:41.572058 waagent[1535]: 2024-02-09T18:32:41.572016Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:32:41.572585 waagent[1535]: 2024-02-09T18:32:41.572517Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:32:41.573076 waagent[1535]: 2024-02-09T18:32:41.573006Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:32:41.573167 waagent[1535]: 2024-02-09T18:32:41.573102Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 18:32:41.574193 waagent[1535]: 2024-02-09T18:32:41.574114Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:32:41.574519 waagent[1535]: 2024-02-09T18:32:41.574455Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 9 18:32:41.575049 waagent[1535]: 2024-02-09T18:32:41.574978Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:32:41.584617 waagent[1535]: 2024-02-09T18:32:41.584546Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 18:32:41.586860 waagent[1535]: 2024-02-09T18:32:41.586797Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:32:41.588718 waagent[1535]: 2024-02-09T18:32:41.588654Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 18:32:41.613141 waagent[1535]: 2024-02-09T18:32:41.612997Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1526' Feb 9 18:32:41.629470 waagent[1535]: 2024-02-09T18:32:41.629377Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Feb 9 18:32:41.696282 waagent[1535]: 2024-02-09T18:32:41.696156Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:32:41.696282 waagent[1535]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:32:41.696282 waagent[1535]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:32:41.696282 waagent[1535]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:5d:7f brd ff:ff:ff:ff:ff:ff Feb 9 18:32:41.696282 waagent[1535]: 3: enP45037s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:5d:7f brd ff:ff:ff:ff:ff:ff\ altname enP45037p0s2 Feb 9 18:32:41.696282 waagent[1535]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:32:41.696282 waagent[1535]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:32:41.696282 waagent[1535]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:32:41.696282 waagent[1535]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:32:41.696282 waagent[1535]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:32:41.696282 waagent[1535]: 2: eth0 inet6 fe80::222:48ff:fe7b:5d7f/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:32:41.811053 waagent[1535]: 2024-02-09T18:32:41.810946Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 18:32:41.946748 waagent[1476]: 2024-02-09T18:32:41.946617Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 18:32:41.950847 waagent[1476]: 2024-02-09T18:32:41.950786Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 18:32:43.065597 waagent[1567]: 
2024-02-09T18:32:43.065494Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 18:32:43.066612 waagent[1567]: 2024-02-09T18:32:43.066557Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 18:32:43.066844 waagent[1567]: 2024-02-09T18:32:43.066797Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 18:32:43.074969 waagent[1567]: 2024-02-09T18:32:43.074878Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:32:43.075481 waagent[1567]: 2024-02-09T18:32:43.075398Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:43.075741 waagent[1567]: 2024-02-09T18:32:43.075693Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:43.088157 waagent[1567]: 2024-02-09T18:32:43.088094Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 18:32:43.096450 waagent[1567]: 2024-02-09T18:32:43.096377Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 18:32:43.097550 waagent[1567]: 2024-02-09T18:32:43.097493Z INFO ExtHandler Feb 9 18:32:43.097789 waagent[1567]: 2024-02-09T18:32:43.097742Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 38c0bf2e-052d-41b2-b8d0-d67c1bf65d5e eTag: 9116760014256056716 source: Fabric] Feb 9 18:32:43.098625 waagent[1567]: 2024-02-09T18:32:43.098569Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 18:32:43.099954 waagent[1567]: 2024-02-09T18:32:43.099896Z INFO ExtHandler Feb 9 18:32:43.100178 waagent[1567]: 2024-02-09T18:32:43.100131Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 18:32:43.106233 waagent[1567]: 2024-02-09T18:32:43.106188Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 18:32:43.106857 waagent[1567]: 2024-02-09T18:32:43.106812Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:32:43.130037 waagent[1567]: 2024-02-09T18:32:43.129977Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 18:32:43.198722 waagent[1567]: 2024-02-09T18:32:43.198589Z INFO ExtHandler Downloaded certificate {'thumbprint': '997D6ECEA3687ADA4C7BBC0CA251CD5E4AC265EB', 'hasPrivateKey': False} Feb 9 18:32:43.199959 waagent[1567]: 2024-02-09T18:32:43.199900Z INFO ExtHandler Downloaded certificate {'thumbprint': '70F748330842F76F286700F4F9EB7AF953E3D2C6', 'hasPrivateKey': True} Feb 9 18:32:43.201130 waagent[1567]: 2024-02-09T18:32:43.201071Z INFO ExtHandler Fetch goal state completed Feb 9 18:32:43.226164 waagent[1567]: 2024-02-09T18:32:43.226095Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1567 Feb 9 18:32:43.229736 waagent[1567]: 2024-02-09T18:32:43.229678Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:32:43.231289 waagent[1567]: 2024-02-09T18:32:43.231233Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:32:43.237217 waagent[1567]: 2024-02-09T18:32:43.237156Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:32:43.237910 waagent[1567]: 2024-02-09T18:32:43.237852Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:32:43.245591 
waagent[1567]: 2024-02-09T18:32:43.245537Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 18:32:43.246167 waagent[1567]: 2024-02-09T18:32:43.246114Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:32:43.252113 waagent[1567]: 2024-02-09T18:32:43.252018Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 18:32:43.255764 waagent[1567]: 2024-02-09T18:32:43.255709Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 18:32:43.257313 waagent[1567]: 2024-02-09T18:32:43.257246Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:32:43.257690 waagent[1567]: 2024-02-09T18:32:43.257613Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:43.258397 waagent[1567]: 2024-02-09T18:32:43.258328Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:32:43.258538 waagent[1567]: 2024-02-09T18:32:43.258466Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:43.259611 waagent[1567]: 2024-02-09T18:32:43.259519Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 18:32:43.260072 waagent[1567]: 2024-02-09T18:32:43.259904Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:32:43.260248 waagent[1567]: 2024-02-09T18:32:43.260142Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 18:32:43.260524 waagent[1567]: 2024-02-09T18:32:43.260456Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:43.260770 waagent[1567]: 2024-02-09T18:32:43.260703Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:32:43.260770 waagent[1567]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:32:43.260770 waagent[1567]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:32:43.260770 waagent[1567]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:32:43.260770 waagent[1567]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:43.260770 waagent[1567]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:43.260770 waagent[1567]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:43.261376 waagent[1567]: 2024-02-09T18:32:43.261307Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:43.263756 waagent[1567]: 2024-02-09T18:32:43.263649Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:32:43.264594 waagent[1567]: 2024-02-09T18:32:43.264532Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:32:43.264924 waagent[1567]: 2024-02-09T18:32:43.264861Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 18:32:43.265169 waagent[1567]: 2024-02-09T18:32:43.265112Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:32:43.265280 waagent[1567]: 2024-02-09T18:32:43.265222Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:32:43.265795 waagent[1567]: 2024-02-09T18:32:43.265730Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:32:43.293240 waagent[1567]: 2024-02-09T18:32:43.292932Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 18:32:43.293407 waagent[1567]: 2024-02-09T18:32:43.293332Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:32:43.293407 waagent[1567]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:32:43.293407 waagent[1567]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:32:43.293407 waagent[1567]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:5d:7f brd ff:ff:ff:ff:ff:ff Feb 9 18:32:43.293407 waagent[1567]: 3: enP45037s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:5d:7f brd ff:ff:ff:ff:ff:ff\ altname enP45037p0s2 Feb 9 18:32:43.293407 waagent[1567]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:32:43.293407 waagent[1567]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:32:43.293407 waagent[1567]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:32:43.293407 waagent[1567]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:32:43.293407 waagent[1567]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:32:43.293407 waagent[1567]: 2: eth0 inet6 fe80::222:48ff:fe7b:5d7f/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:32:43.295340 waagent[1567]: 
2024-02-09T18:32:43.295261Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 18:32:43.333177 waagent[1567]: 2024-02-09T18:32:43.333076Z INFO ExtHandler ExtHandler Feb 9 18:32:43.333487 waagent[1567]: 2024-02-09T18:32:43.333418Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 54133b4c-b078-4375-9efd-3f9d354e93e6 correlation 621be778-ef2d-4125-9668-87f4afae87a9 created: 2024-02-09T18:30:40.645157Z] Feb 9 18:32:43.334458 waagent[1567]: 2024-02-09T18:32:43.334376Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 18:32:43.336280 waagent[1567]: 2024-02-09T18:32:43.336215Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 18:32:43.359837 waagent[1567]: 2024-02-09T18:32:43.359770Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 18:32:43.381009 waagent[1567]: 2024-02-09T18:32:43.380933Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 601D3876-97F7-47B0-A3CD-C9BE3ED95F6D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 18:32:43.533107 waagent[1567]: 2024-02-09T18:32:43.532989Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 18:32:43.533107 waagent[1567]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.533107 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.533107 waagent[1567]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.533107 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.533107 waagent[1567]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.533107 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.533107 waagent[1567]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:32:43.533107 waagent[1567]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:32:43.533107 waagent[1567]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:32:43.541274 waagent[1567]: 2024-02-09T18:32:43.541172Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 18:32:43.541274 waagent[1567]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.541274 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.541274 waagent[1567]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.541274 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.541274 waagent[1567]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:43.541274 waagent[1567]: pkts bytes target prot opt in out source destination Feb 9 18:32:43.541274 waagent[1567]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:32:43.541274 waagent[1567]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:32:43.541274 waagent[1567]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:32:43.542118 waagent[1567]: 2024-02-09T18:32:43.542071Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 18:33:07.209191 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Feb 9 18:33:14.033137 update_engine[1366]: I0209 18:33:14.032783 1366 update_attempter.cc:509] Updating boot flags... Feb 9 18:33:17.823788 systemd[1]: Created slice system-sshd.slice. Feb 9 18:33:17.824870 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.12.6:58108.service. Feb 9 18:33:18.445272 sshd[1681]: Accepted publickey for core from 10.200.12.6 port 58108 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:18.461374 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:18.465158 systemd-logind[1364]: New session 3 of user core. Feb 9 18:33:18.465922 systemd[1]: Started session-3.scope. Feb 9 18:33:18.784206 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.12.6:58110.service. Feb 9 18:33:19.164072 sshd[1686]: Accepted publickey for core from 10.200.12.6 port 58110 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:19.165290 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:19.169348 systemd[1]: Started session-4.scope. Feb 9 18:33:19.169656 systemd-logind[1364]: New session 4 of user core. Feb 9 18:33:19.444357 sshd[1686]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:19.446786 systemd[1]: sshd@1-10.200.20.37:22-10.200.12.6:58110.service: Deactivated successfully. Feb 9 18:33:19.447477 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:33:19.448002 systemd-logind[1364]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:33:19.448815 systemd-logind[1364]: Removed session 4. Feb 9 18:33:19.511300 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.12.6:58126.service. 
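The "Azure fabric firewall rules" dumped by waagent above can be written out as an iptables-save style fragment. This is a hedged reconstruction from the logged counters only: the table is an assumption (the log shows just the built-in INPUT/FORWARD/OUTPUT chains, so the default `filter` table is assumed), and the fragment is written to a scratch file rather than loaded.

```shell
# Hedged reconstruction of the waagent rules shown in the log, as an
# iptables-restore fragment. 168.63.129.16 is the Azure wireserver address
# that appears in the log; the "filter" table is an assumption.
cat > ./waagent-rules.example <<'EOF'
*filter
-A OUTPUT -d 168.63.129.16/32 -p tcp -m tcp --dport 53 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 168.63.129.16/32 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP
COMMIT
EOF
```

The ordering matters: DNS to the wireserver and root-owned (UID 0, i.e. the agent's) traffic are accepted first, and only then are new connections from other users dropped, which matches the rule order in the log.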
Feb 9 18:33:19.891591 sshd[1695]: Accepted publickey for core from 10.200.12.6 port 58126 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:19.893216 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:19.896406 systemd-logind[1364]: New session 5 of user core. Feb 9 18:33:19.897222 systemd[1]: Started session-5.scope. Feb 9 18:33:20.167267 sshd[1695]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:20.169664 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:33:20.170325 systemd-logind[1364]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:33:20.170485 systemd[1]: sshd@2-10.200.20.37:22-10.200.12.6:58126.service: Deactivated successfully. Feb 9 18:33:20.171389 systemd-logind[1364]: Removed session 5. Feb 9 18:33:20.236173 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.12.6:58136.service. Feb 9 18:33:20.649454 sshd[1701]: Accepted publickey for core from 10.200.12.6 port 58136 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:20.650957 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:20.654917 systemd[1]: Started session-6.scope. Feb 9 18:33:20.655489 systemd-logind[1364]: New session 6 of user core. Feb 9 18:33:20.951533 sshd[1701]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:20.953918 systemd[1]: sshd@3-10.200.20.37:22-10.200.12.6:58136.service: Deactivated successfully. Feb 9 18:33:20.954584 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:33:20.955073 systemd-logind[1364]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:33:20.955973 systemd-logind[1364]: Removed session 6. Feb 9 18:33:21.022120 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.12.6:58142.service. 
Feb 9 18:33:21.443942 sshd[1707]: Accepted publickey for core from 10.200.12.6 port 58142 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:21.445153 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:21.449271 systemd[1]: Started session-7.scope. Feb 9 18:33:21.449605 systemd-logind[1364]: New session 7 of user core. Feb 9 18:33:21.940101 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:33:21.940297 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:22.637673 systemd[1]: Starting docker.service... Feb 9 18:33:22.686954 env[1725]: time="2024-02-09T18:33:22.686899520Z" level=info msg="Starting up" Feb 9 18:33:22.697494 env[1725]: time="2024-02-09T18:33:22.697469535Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:22.697601 env[1725]: time="2024-02-09T18:33:22.697587535Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:22.697675 env[1725]: time="2024-02-09T18:33:22.697660215Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:22.697732 env[1725]: time="2024-02-09T18:33:22.697719815Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:22.699450 env[1725]: time="2024-02-09T18:33:22.699407297Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:22.699541 env[1725]: time="2024-02-09T18:33:22.699527257Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:22.699612 env[1725]: time="2024-02-09T18:33:22.699598177Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:22.699669 env[1725]: time="2024-02-09T18:33:22.699657618Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:22.755542 env[1725]: time="2024-02-09T18:33:22.755511734Z" level=info msg="Loading containers: start." Feb 9 18:33:22.904453 kernel: Initializing XFRM netlink socket Feb 9 18:33:22.928324 env[1725]: time="2024-02-09T18:33:22.928282251Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:33:23.102536 systemd-networkd[1526]: docker0: Link UP Feb 9 18:33:23.123898 env[1725]: time="2024-02-09T18:33:23.123861468Z" level=info msg="Loading containers: done." Feb 9 18:33:23.132070 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2095958171-merged.mount: Deactivated successfully. Feb 9 18:33:23.145858 env[1725]: time="2024-02-09T18:33:23.145819097Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:33:23.146217 env[1725]: time="2024-02-09T18:33:23.146199857Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:33:23.146408 env[1725]: time="2024-02-09T18:33:23.146394257Z" level=info msg="Daemon has completed initialization" Feb 9 18:33:23.175540 systemd[1]: Started docker.service. Feb 9 18:33:23.181540 env[1725]: time="2024-02-09T18:33:23.181156742Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:33:23.197692 systemd[1]: Reloading. 
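The daemon message above notes that docker0 defaulted to 172.17.0.0/16 and that `--bip` can set a preferred address. The persistent equivalent is the `bip` key in the daemon configuration; a minimal sketch follows, written to a scratch path for illustration (the real file lives at /etc/docker/daemon.json, and the 172.30.0.1/24 value is just an example subnet):

```shell
# Minimal daemon.json sketch setting the docker0 bridge address via "bip".
# Scratch path used here; in practice this goes to /etc/docker/daemon.json
# followed by a daemon restart.
cat > ./daemon.json.example <<'EOF'
{
  "bip": "172.30.0.1/24"
}
EOF
```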
Feb 9 18:33:23.261464 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-02-09T18:33:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:33:23.261495 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-02-09T18:33:23Z" level=info msg="torcx already run" Feb 9 18:33:23.326630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:33:23.326646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:33:23.343456 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:33:23.430279 systemd[1]: Started kubelet.service. Feb 9 18:33:23.495144 kubelet[1913]: E0209 18:33:23.495000 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 18:33:23.497130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:23.497254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:27.565698 env[1379]: time="2024-02-09T18:33:27.565630178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 18:33:28.489391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78825503.mount: Deactivated successfully. 
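The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected crash-loop on a node that has not been bootstrapped yet: that file is normally generated by `kubeadm init` or `kubeadm join`, so the unit keeps restarting until one of those runs. A small check, with the path made overridable purely for illustration:

```shell
# Report whether the kubelet config that kubeadm generates is present.
# The default path matches the one in the log; KUBELET_CONFIG is only an
# illustrative override.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$KUBELET_CONFIG" ]; then
  echo "kubelet config present: $KUBELET_CONFIG"
else
  echo "kubelet config missing: bootstrap the node with kubeadm init/join"
fi
```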
Feb 9 18:33:30.887444 env[1379]: time="2024-02-09T18:33:30.887388508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:30.893031 env[1379]: time="2024-02-09T18:33:30.892983719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:30.896639 env[1379]: time="2024-02-09T18:33:30.896608256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:30.899484 env[1379]: time="2024-02-09T18:33:30.899449283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:30.900206 env[1379]: time="2024-02-09T18:33:30.900177870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:68142d88471bf00b1317307442bd31edbbc7532061d623e85659df2d417308fb\"" Feb 9 18:33:30.909339 env[1379]: time="2024-02-09T18:33:30.909300814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 18:33:33.625666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:33:33.625848 systemd[1]: Stopped kubelet.service. Feb 9 18:33:33.627219 systemd[1]: Started kubelet.service. 
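During the reload above, systemd warned that locksmithd.service still uses the deprecated `CPUShares=` and `MemoryLimit=` directives. The non-deprecated replacements are `CPUWeight=` and `MemoryMax=`; a drop-in sketch follows, written to a scratch path (the real location would be /etc/systemd/system/locksmithd.service.d/), with illustrative values since the unit's actual settings are not shown in the log:

```shell
# Drop-in sketch replacing deprecated cgroup directives with their
# successors. Values are illustrative, not taken from the unit file.
mkdir -p ./locksmithd.service.d
cat > ./locksmithd.service.d/10-cgroup.conf <<'EOF'
[Service]
# Replaces the deprecated CPUShares= directive.
CPUWeight=10
# Replaces the deprecated MemoryLimit= directive (hard cap).
MemoryMax=64M
EOF
```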
Feb 9 18:33:33.678141 kubelet[1936]: E0209 18:33:33.678089 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 18:33:33.680616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:33.680742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:33.766743 env[1379]: time="2024-02-09T18:33:33.766689598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.773346 env[1379]: time="2024-02-09T18:33:33.773309788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.777961 env[1379]: time="2024-02-09T18:33:33.777931508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.784480 env[1379]: time="2024-02-09T18:33:33.784448774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.785143 env[1379]: time="2024-02-09T18:33:33.785118077Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:8dbd4fd1241644100b94eb40a9d284c5cf08fa7f2d15cafdf1ca8cec8443b31f\"" Feb 9 18:33:33.793149 env[1379]: time="2024-02-09T18:33:33.793125435Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 18:33:35.617497 env[1379]: time="2024-02-09T18:33:35.617453751Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.623094 env[1379]: time="2024-02-09T18:33:35.623064655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.626811 env[1379]: time="2024-02-09T18:33:35.626776217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.629833 env[1379]: time="2024-02-09T18:33:35.629799436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.630553 env[1379]: time="2024-02-09T18:33:35.630527500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:541cddf10a6c9bb71f141eeefea4203714984b67ec3582fb4538058af9e43663\"" Feb 9 18:33:35.639476 env[1379]: time="2024-02-09T18:33:35.639439232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 18:33:36.760361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430339638.mount: Deactivated successfully. 
Feb 9 18:33:37.689787 env[1379]: time="2024-02-09T18:33:37.689742237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.699644 env[1379]: time="2024-02-09T18:33:37.699600944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.704174 env[1379]: time="2024-02-09T18:33:37.704144685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.708885 env[1379]: time="2024-02-09T18:33:37.708855872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.709353 env[1379]: time="2024-02-09T18:33:37.709320246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 9 18:33:37.717779 env[1379]: time="2024-02-09T18:33:37.717739628Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:33:38.375108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233359763.mount: Deactivated successfully. 
Feb 9 18:33:38.407370 env[1379]: time="2024-02-09T18:33:38.407303781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:38.414517 env[1379]: time="2024-02-09T18:33:38.414478598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:38.418767 env[1379]: time="2024-02-09T18:33:38.418741567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:38.426250 env[1379]: time="2024-02-09T18:33:38.426213153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:38.426841 env[1379]: time="2024-02-09T18:33:38.426809291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:33:38.435700 env[1379]: time="2024-02-09T18:33:38.435670199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 18:33:39.183451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713492899.mount: Deactivated successfully. Feb 9 18:33:43.875666 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 18:33:43.875840 systemd[1]: Stopped kubelet.service. Feb 9 18:33:43.877212 systemd[1]: Started kubelet.service. 
Feb 9 18:33:43.927516 kubelet[1963]: E0209 18:33:43.927463 1963 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 18:33:43.929359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:43.929500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:44.895831 env[1379]: time="2024-02-09T18:33:44.895773085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:44.902955 env[1379]: time="2024-02-09T18:33:44.902919429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:44.908192 env[1379]: time="2024-02-09T18:33:44.908165365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:44.912623 env[1379]: time="2024-02-09T18:33:44.912582439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:44.914108 env[1379]: time="2024-02-09T18:33:44.914050597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace\"" Feb 9 18:33:44.927171 env[1379]: time="2024-02-09T18:33:44.927141695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 
18:33:45.732313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767338052.mount: Deactivated successfully. Feb 9 18:33:46.349349 env[1379]: time="2024-02-09T18:33:46.349306215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:46.359865 env[1379]: time="2024-02-09T18:33:46.359826873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:46.364646 env[1379]: time="2024-02-09T18:33:46.364618030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:46.370457 env[1379]: time="2024-02-09T18:33:46.370418853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:46.371111 env[1379]: time="2024-02-09T18:33:46.371086109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Feb 9 18:33:51.198710 systemd[1]: Stopped kubelet.service. Feb 9 18:33:51.213876 systemd[1]: Reloading. 
Feb 9 18:33:51.292419 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2024-02-09T18:33:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:33:51.292459 /usr/lib/systemd/system-generators/torcx-generator[2056]: time="2024-02-09T18:33:51Z" level=info msg="torcx already run" Feb 9 18:33:51.369738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:33:51.369944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:33:51.387129 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:33:51.478342 systemd[1]: Started kubelet.service. Feb 9 18:33:51.525524 kubelet[2116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:33:51.525524 kubelet[2116]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:33:51.525524 kubelet[2116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
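The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via the kubelet's `--config` flag. A hedged KubeletConfiguration sketch follows, written to a scratch path: the flexvolume path is the one the kubelet creates later in this log, while the containerd socket path is an assumption (the running kubelet here is v1.28, where `containerRuntimeEndpoint` is a valid config-file field).

```shell
# KubeletConfiguration sketch carrying the two deprecated flags as
# config-file fields. Scratch path; the real file is the one named by
# the kubelet's --config flag.
cat > ./kubelet-config.example.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint (socket path is an assumption).
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir (path taken from the flexvolume message
# later in this log).
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```

Note that `--pod-infra-container-image`, also flagged above, has no config-file equivalent; per the warning, sandbox image selection moves to the CRI runtime's own configuration.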
Feb 9 18:33:51.525872 kubelet[2116]: I0209 18:33:51.525585 2116 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:33:52.124627 kubelet[2116]: I0209 18:33:52.124597 2116 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 18:33:52.124784 kubelet[2116]: I0209 18:33:52.124774 2116 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:33:52.125036 kubelet[2116]: I0209 18:33:52.125023 2116 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 18:33:52.128980 kubelet[2116]: E0209 18:33:52.128943 2116 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.129080 kubelet[2116]: I0209 18:33:52.129005 2116 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:33:52.132961 kubelet[2116]: W0209 18:33:52.132944 2116 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:33:52.133544 kubelet[2116]: I0209 18:33:52.133530 2116 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:33:52.133826 kubelet[2116]: I0209 18:33:52.133815 2116 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:33:52.134035 kubelet[2116]: I0209 18:33:52.134022 2116 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 18:33:52.134163 kubelet[2116]: I0209 18:33:52.134152 2116 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 18:33:52.134221 kubelet[2116]: I0209 18:33:52.134212 2116 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 18:33:52.134355 kubelet[2116]: I0209 
18:33:52.134344 2116 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:33:52.134527 kubelet[2116]: I0209 18:33:52.134516 2116 kubelet.go:393] "Attempting to sync node with API server" Feb 9 18:33:52.134609 kubelet[2116]: I0209 18:33:52.134599 2116 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:33:52.134673 kubelet[2116]: I0209 18:33:52.134664 2116 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:33:52.134728 kubelet[2116]: I0209 18:33:52.134719 2116 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:33:52.136568 kubelet[2116]: I0209 18:33:52.136549 2116 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:33:52.136880 kubelet[2116]: W0209 18:33:52.136865 2116 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:33:52.137349 kubelet[2116]: I0209 18:33:52.137329 2116 server.go:1232] "Started kubelet" Feb 9 18:33:52.137585 kubelet[2116]: W0209 18:33:52.137551 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.137679 kubelet[2116]: E0209 18:33:52.137668 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.137810 kubelet[2116]: W0209 18:33:52.137786 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-de7ead93d8&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: 
connection refused Feb 9 18:33:52.137880 kubelet[2116]: E0209 18:33:52.137869 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-de7ead93d8&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.138780 kubelet[2116]: I0209 18:33:52.138754 2116 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:33:52.139596 kubelet[2116]: E0209 18:33:52.139500 2116 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-de7ead93d8.17b24581b63bce5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-de7ead93d8", UID:"ci-3510.3.2-a-de7ead93d8", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-de7ead93d8"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 33, 52, 137309786, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 33, 52, 137309786, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-de7ead93d8"}': 'Post "https://10.200.20.37:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.37:6443: connect: connection refused'(may 
retry after sleeping) Feb 9 18:33:52.139846 kubelet[2116]: I0209 18:33:52.139832 2116 server.go:462] "Adding debug handlers to kubelet server" Feb 9 18:33:52.140061 kubelet[2116]: E0209 18:33:52.140039 2116 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:33:52.140061 kubelet[2116]: E0209 18:33:52.140063 2116 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:33:52.140593 kubelet[2116]: I0209 18:33:52.140564 2116 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:33:52.140815 kubelet[2116]: I0209 18:33:52.140775 2116 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 18:33:52.148081 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:33:52.148198 kubelet[2116]: I0209 18:33:52.148176 2116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:33:52.149796 kubelet[2116]: I0209 18:33:52.149771 2116 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 18:33:52.151272 kubelet[2116]: I0209 18:33:52.151248 2116 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:33:52.151462 kubelet[2116]: I0209 18:33:52.151423 2116 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 18:33:52.153833 kubelet[2116]: W0209 18:33:52.153773 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.153833 kubelet[2116]: E0209 18:33:52.153836 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.154017 kubelet[2116]: E0209 18:33:52.153953 2116 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-de7ead93d8\" not found" Feb 9 18:33:52.154276 kubelet[2116]: E0209 18:33:52.154244 2116 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Feb 9 18:33:52.183471 kubelet[2116]: I0209 18:33:52.183412 2116 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 18:33:52.184316 kubelet[2116]: I0209 18:33:52.184278 2116 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 18:33:52.184414 kubelet[2116]: I0209 18:33:52.184404 2116 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 18:33:52.185752 kubelet[2116]: I0209 18:33:52.185736 2116 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 18:33:52.185903 kubelet[2116]: E0209 18:33:52.185892 2116 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:33:52.186817 kubelet[2116]: W0209 18:33:52.186792 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.186938 kubelet[2116]: E0209 18:33:52.186928 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:52.249568 kubelet[2116]: I0209 18:33:52.249531 2116 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:33:52.249568 kubelet[2116]: I0209 18:33:52.249553 2116 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:33:52.249568 kubelet[2116]: I0209 18:33:52.249570 2116 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:33:52.254477 kubelet[2116]: I0209 18:33:52.254445 2116 policy_none.go:49] "None policy: Start" Feb 9 18:33:52.255395 kubelet[2116]: I0209 18:33:52.255366 2116 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:33:52.255489 kubelet[2116]: I0209 18:33:52.255413 2116 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:33:52.256424 kubelet[2116]: I0209 18:33:52.256392 2116 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.256920 kubelet[2116]: E0209 18:33:52.256893 2116 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.262698 systemd[1]: Created slice kubepods.slice. Feb 9 18:33:52.266593 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:33:52.269035 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:33:52.278191 kubelet[2116]: I0209 18:33:52.278169 2116 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:33:52.278377 kubelet[2116]: I0209 18:33:52.278358 2116 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:33:52.279547 kubelet[2116]: E0209 18:33:52.279530 2116 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-de7ead93d8\" not found" Feb 9 18:33:52.287285 kubelet[2116]: I0209 18:33:52.287265 2116 topology_manager.go:215] "Topology Admit Handler" podUID="19581d2a014b16e02647601c2f1583e0" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.288726 kubelet[2116]: I0209 18:33:52.288701 2116 topology_manager.go:215] "Topology Admit Handler" podUID="1e7d23714bba59664f2691dcc93e4dd1" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.289940 kubelet[2116]: I0209 18:33:52.289916 2116 topology_manager.go:215] "Topology Admit Handler" podUID="0d8707c14da3f3bf46e8c2cdf9b18dfa" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.294923 systemd[1]: Created slice kubepods-burstable-pod19581d2a014b16e02647601c2f1583e0.slice. Feb 9 18:33:52.307927 systemd[1]: Created slice kubepods-burstable-pod0d8707c14da3f3bf46e8c2cdf9b18dfa.slice. 
Feb 9 18:33:52.323079 systemd[1]: Created slice kubepods-burstable-pod1e7d23714bba59664f2691dcc93e4dd1.slice. Feb 9 18:33:52.355271 kubelet[2116]: E0209 18:33:52.355243 2116 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="400ms" Feb 9 18:33:52.452691 kubelet[2116]: I0209 18:33:52.452660 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452816 kubelet[2116]: I0209 18:33:52.452732 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452816 kubelet[2116]: I0209 18:33:52.452797 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452869 kubelet[2116]: I0209 18:33:52.452820 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452894 kubelet[2116]: I0209 18:33:52.452876 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452924 kubelet[2116]: I0209 18:33:52.452898 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d8707c14da3f3bf46e8c2cdf9b18dfa-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-de7ead93d8\" (UID: \"0d8707c14da3f3bf46e8c2cdf9b18dfa\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.452972 kubelet[2116]: I0209 18:33:52.452954 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.453033 kubelet[2116]: I0209 18:33:52.453019 2116 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.453063 kubelet[2116]: I0209 18:33:52.453045 2116 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.459016 kubelet[2116]: I0209 18:33:52.458997 2116 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.459469 kubelet[2116]: E0209 18:33:52.459452 2116 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:52.607935 env[1379]: time="2024-02-09T18:33:52.607888272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-de7ead93d8,Uid:19581d2a014b16e02647601c2f1583e0,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:52.610906 env[1379]: time="2024-02-09T18:33:52.610862494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-de7ead93d8,Uid:0d8707c14da3f3bf46e8c2cdf9b18dfa,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:52.625633 env[1379]: time="2024-02-09T18:33:52.625594965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-de7ead93d8,Uid:1e7d23714bba59664f2691dcc93e4dd1,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:52.755859 kubelet[2116]: E0209 18:33:52.755758 2116 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Feb 9 18:33:52.861797 kubelet[2116]: I0209 18:33:52.861478 2116 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-de7ead93d8" Feb 
9 18:33:52.861797 kubelet[2116]: E0209 18:33:52.861771 2116 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:53.220164 kubelet[2116]: W0209 18:33:53.219567 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.220164 kubelet[2116]: E0209 18:33:53.219614 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.219751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218666041.mount: Deactivated successfully. 
Feb 9 18:33:53.316724 env[1379]: time="2024-02-09T18:33:53.316681899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.389823 kubelet[2116]: W0209 18:33:53.389747 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-de7ead93d8&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.389823 kubelet[2116]: E0209 18:33:53.389818 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-de7ead93d8&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.425496 kubelet[2116]: W0209 18:33:53.425398 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.425606 kubelet[2116]: E0209 18:33:53.425506 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.518131 env[1379]: time="2024-02-09T18:33:53.518029522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.521436 env[1379]: time="2024-02-09T18:33:53.521391432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.556560 kubelet[2116]: E0209 18:33:53.556528 2116 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Feb 9 18:33:53.581189 kubelet[2116]: W0209 18:33:53.581159 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.581293 kubelet[2116]: E0209 18:33:53.581201 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:53.615185 env[1379]: time="2024-02-09T18:33:53.615147281Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.664125 kubelet[2116]: I0209 18:33:53.664088 2116 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:53.664437 kubelet[2116]: E0209 18:33:53.664406 2116 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:53.667312 env[1379]: time="2024-02-09T18:33:53.667270314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.710130 
env[1379]: time="2024-02-09T18:33:53.710091995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.766835 env[1379]: time="2024-02-09T18:33:53.766795242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.818250 env[1379]: time="2024-02-09T18:33:53.818155779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.821097 env[1379]: time="2024-02-09T18:33:53.821065119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.824728 env[1379]: time="2024-02-09T18:33:53.824690673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.870528 env[1379]: time="2024-02-09T18:33:53.870484296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:53.961349 env[1379]: time="2024-02-09T18:33:53.961300844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:54.198095 kubelet[2116]: E0209 18:33:54.198064 2116 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:54.270819 env[1379]: time="2024-02-09T18:33:54.270757680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:54.270978 env[1379]: time="2024-02-09T18:33:54.270956764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:54.271094 env[1379]: time="2024-02-09T18:33:54.271072807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:54.271481 env[1379]: time="2024-02-09T18:33:54.271393213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f pid=2155 runtime=io.containerd.runc.v2 Feb 9 18:33:54.285758 systemd[1]: Started cri-containerd-aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f.scope. Feb 9 18:33:54.288562 systemd[1]: run-containerd-runc-k8s.io-aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f-runc.PgtwJi.mount: Deactivated successfully. 
Feb 9 18:33:54.320525 env[1379]: time="2024-02-09T18:33:54.320473319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-de7ead93d8,Uid:19581d2a014b16e02647601c2f1583e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f\"" Feb 9 18:33:54.325789 env[1379]: time="2024-02-09T18:33:54.325744425Z" level=info msg="CreateContainer within sandbox \"aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:33:54.524554 env[1379]: time="2024-02-09T18:33:54.524152649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:54.524554 env[1379]: time="2024-02-09T18:33:54.524192370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:54.524718 env[1379]: time="2024-02-09T18:33:54.524202810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:54.524718 env[1379]: time="2024-02-09T18:33:54.524363853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144 pid=2198 runtime=io.containerd.runc.v2 Feb 9 18:33:54.535236 systemd[1]: Started cri-containerd-e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144.scope. 
Feb 9 18:33:54.563571 env[1379]: time="2024-02-09T18:33:54.563518880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-de7ead93d8,Uid:1e7d23714bba59664f2691dcc93e4dd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144\"" Feb 9 18:33:54.566146 env[1379]: time="2024-02-09T18:33:54.566116092Z" level=info msg="CreateContainer within sandbox \"e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:33:55.024031 env[1379]: time="2024-02-09T18:33:55.023952956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:55.024031 env[1379]: time="2024-02-09T18:33:55.023993357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:55.024405 env[1379]: time="2024-02-09T18:33:55.024003517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:55.024405 env[1379]: time="2024-02-09T18:33:55.024160481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1 pid=2243 runtime=io.containerd.runc.v2 Feb 9 18:33:55.034311 systemd[1]: Started cri-containerd-e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1.scope. 
Feb 9 18:33:55.038456 kubelet[2116]: W0209 18:33:55.037829 2116 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:55.038456 kubelet[2116]: E0209 18:33:55.037872 2116 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Feb 9 18:33:55.064393 env[1379]: time="2024-02-09T18:33:55.064353148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-de7ead93d8,Uid:0d8707c14da3f3bf46e8c2cdf9b18dfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1\"" Feb 9 18:33:55.065668 env[1379]: time="2024-02-09T18:33:55.065642734Z" level=info msg="CreateContainer within sandbox \"aeee52136d0288563ac794b8f398131dce5e3357a463108139b09162ed9a619f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"01f816c05bd4986bdaeca51d642a87eca9a8f558b8d6769e5461cb9cc38c9251\"" Feb 9 18:33:55.066235 env[1379]: time="2024-02-09T18:33:55.066212345Z" level=info msg="StartContainer for \"01f816c05bd4986bdaeca51d642a87eca9a8f558b8d6769e5461cb9cc38c9251\"" Feb 9 18:33:55.069643 env[1379]: time="2024-02-09T18:33:55.069615891Z" level=info msg="CreateContainer within sandbox \"e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:33:55.082286 systemd[1]: Started cri-containerd-01f816c05bd4986bdaeca51d642a87eca9a8f558b8d6769e5461cb9cc38c9251.scope. 
Feb 9 18:33:55.157205 kubelet[2116]: E0209 18:33:55.157167 2116 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="3.2s" Feb 9 18:33:55.222234 env[1379]: time="2024-02-09T18:33:55.222180602Z" level=info msg="StartContainer for \"01f816c05bd4986bdaeca51d642a87eca9a8f558b8d6769e5461cb9cc38c9251\" returns successfully" Feb 9 18:33:55.268511 kubelet[2116]: I0209 18:33:55.268418 2116 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:55.311791 env[1379]: time="2024-02-09T18:33:55.311673036Z" level=info msg="CreateContainer within sandbox \"e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44\"" Feb 9 18:33:55.312983 env[1379]: time="2024-02-09T18:33:55.312958981Z" level=info msg="StartContainer for \"333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44\"" Feb 9 18:33:55.332265 systemd[1]: run-containerd-runc-k8s.io-333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44-runc.5Rb07h.mount: Deactivated successfully. Feb 9 18:33:55.336711 systemd[1]: Started cri-containerd-333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44.scope. 
Feb 9 18:33:55.415347 env[1379]: time="2024-02-09T18:33:55.415288547Z" level=info msg="StartContainer for \"333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44\" returns successfully" Feb 9 18:33:55.568847 env[1379]: time="2024-02-09T18:33:55.568729714Z" level=info msg="CreateContainer within sandbox \"e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc\"" Feb 9 18:33:55.569900 env[1379]: time="2024-02-09T18:33:55.569862097Z" level=info msg="StartContainer for \"52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc\"" Feb 9 18:33:55.592450 systemd[1]: Started cri-containerd-52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc.scope. Feb 9 18:33:55.711921 env[1379]: time="2024-02-09T18:33:55.711872240Z" level=info msg="StartContainer for \"52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc\" returns successfully" Feb 9 18:33:58.137961 kubelet[2116]: I0209 18:33:58.137921 2116 apiserver.go:52] "Watching apiserver" Feb 9 18:33:58.152183 kubelet[2116]: I0209 18:33:58.152150 2116 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:33:58.180094 kubelet[2116]: I0209 18:33:58.180060 2116 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:33:58.322762 kubelet[2116]: E0209 18:33:58.322730 2116 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:00.572158 systemd[1]: Reloading. 
Feb 9 18:34:00.605506 kubelet[2116]: W0209 18:34:00.605485 2116 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:34:00.664978 /usr/lib/systemd/system-generators/torcx-generator[2416]: time="2024-02-09T18:34:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:00.669114 /usr/lib/systemd/system-generators/torcx-generator[2416]: time="2024-02-09T18:34:00Z" level=info msg="torcx already run" Feb 9 18:34:00.780301 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:00.780487 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:00.798022 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:00.907565 systemd[1]: Stopping kubelet.service... Feb 9 18:34:00.921872 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:34:00.922061 systemd[1]: Stopped kubelet.service. Feb 9 18:34:00.923749 systemd[1]: Started kubelet.service. Feb 9 18:34:00.989916 kubelet[2475]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:00.990260 kubelet[2475]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:00.990312 kubelet[2475]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:00.990474 kubelet[2475]: I0209 18:34:00.990411 2475 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:00.995236 kubelet[2475]: I0209 18:34:00.995200 2475 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 18:34:00.995236 kubelet[2475]: I0209 18:34:00.995226 2475 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:00.995585 kubelet[2475]: I0209 18:34:00.995566 2475 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 18:34:00.997875 kubelet[2475]: I0209 18:34:00.997852 2475 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:34:00.998879 kubelet[2475]: I0209 18:34:00.998861 2475 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:01.003528 kubelet[2475]: W0209 18:34:01.003511 2475 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:01.004181 kubelet[2475]: I0209 18:34:01.004160 2475 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:01.004455 kubelet[2475]: I0209 18:34:01.004415 2475 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:01.004698 kubelet[2475]: I0209 18:34:01.004679 2475 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 18:34:01.004838 kubelet[2475]: I0209 18:34:01.004825 2475 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 18:34:01.004908 kubelet[2475]: I0209 18:34:01.004899 2475 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 18:34:01.004984 kubelet[2475]: I0209 
18:34:01.004975 2475 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:01.005184 kubelet[2475]: I0209 18:34:01.005170 2475 kubelet.go:393] "Attempting to sync node with API server" Feb 9 18:34:01.005269 kubelet[2475]: I0209 18:34:01.005258 2475 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:01.005342 kubelet[2475]: I0209 18:34:01.005333 2475 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:34:01.005473 kubelet[2475]: I0209 18:34:01.005460 2475 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:01.006951 kubelet[2475]: I0209 18:34:01.006934 2475 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:01.007714 kubelet[2475]: I0209 18:34:01.007698 2475 server.go:1232] "Started kubelet" Feb 9 18:34:01.010172 kubelet[2475]: I0209 18:34:01.010140 2475 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:01.014563 kubelet[2475]: I0209 18:34:01.014542 2475 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:01.015283 kubelet[2475]: I0209 18:34:01.015263 2475 server.go:462] "Adding debug handlers to kubelet server" Feb 9 18:34:01.016356 kubelet[2475]: I0209 18:34:01.016340 2475 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:34:01.016651 kubelet[2475]: I0209 18:34:01.016635 2475 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 18:34:01.017998 kubelet[2475]: I0209 18:34:01.017979 2475 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 18:34:01.025848 kubelet[2475]: I0209 18:34:01.025814 2475 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:01.025992 kubelet[2475]: I0209 18:34:01.025971 2475 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 18:34:01.027985 kubelet[2475]: 
E0209 18:34:01.027965 2475 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:01.028085 kubelet[2475]: E0209 18:34:01.028075 2475 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:01.033699 kubelet[2475]: I0209 18:34:01.032929 2475 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 18:34:01.038181 kubelet[2475]: I0209 18:34:01.038148 2475 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 18:34:01.038181 kubelet[2475]: I0209 18:34:01.038182 2475 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 18:34:01.038291 kubelet[2475]: I0209 18:34:01.038203 2475 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 18:34:01.038291 kubelet[2475]: E0209 18:34:01.038252 2475 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:34:01.039886 sudo[2495]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:34:01.040092 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:34:01.121784 kubelet[2475]: I0209 18:34:01.121520 2475 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.140797 kubelet[2475]: E0209 18:34:01.140758 2475 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 18:34:01.141508 kubelet[2475]: I0209 18:34:01.141487 2475 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.141581 kubelet[2475]: I0209 18:34:01.141564 2475 
kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.159283 kubelet[2475]: I0209 18:34:01.159169 2475 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:01.159283 kubelet[2475]: I0209 18:34:01.159210 2475 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:01.159283 kubelet[2475]: I0209 18:34:01.159227 2475 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:01.160654 kubelet[2475]: I0209 18:34:01.160622 2475 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:34:01.160750 kubelet[2475]: I0209 18:34:01.160660 2475 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 18:34:01.160750 kubelet[2475]: I0209 18:34:01.160668 2475 policy_none.go:49] "None policy: Start" Feb 9 18:34:01.161382 kubelet[2475]: I0209 18:34:01.161348 2475 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:01.161382 kubelet[2475]: I0209 18:34:01.161384 2475 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:01.161573 kubelet[2475]: I0209 18:34:01.161544 2475 state_mem.go:75] "Updated machine memory state" Feb 9 18:34:01.165204 kubelet[2475]: I0209 18:34:01.165172 2475 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:01.166141 kubelet[2475]: I0209 18:34:01.165791 2475 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:01.341566 kubelet[2475]: I0209 18:34:01.341527 2475 topology_manager.go:215] "Topology Admit Handler" podUID="19581d2a014b16e02647601c2f1583e0" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.341710 kubelet[2475]: I0209 18:34:01.341632 2475 topology_manager.go:215] "Topology Admit Handler" podUID="1e7d23714bba59664f2691dcc93e4dd1" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.341710 kubelet[2475]: I0209 
18:34:01.341669 2475 topology_manager.go:215] "Topology Admit Handler" podUID="0d8707c14da3f3bf46e8c2cdf9b18dfa" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.350206 kubelet[2475]: W0209 18:34:01.350170 2475 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:34:01.350341 kubelet[2475]: W0209 18:34:01.350240 2475 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:34:01.350341 kubelet[2475]: W0209 18:34:01.350275 2475 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 18:34:01.350460 kubelet[2475]: E0209 18:34:01.350442 2475 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-de7ead93d8\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527107 kubelet[2475]: I0209 18:34:01.527003 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527107 kubelet[2475]: I0209 18:34:01.527053 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 
9 18:34:01.527107 kubelet[2475]: I0209 18:34:01.527073 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d8707c14da3f3bf46e8c2cdf9b18dfa-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-de7ead93d8\" (UID: \"0d8707c14da3f3bf46e8c2cdf9b18dfa\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527728 kubelet[2475]: I0209 18:34:01.527701 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527789 kubelet[2475]: I0209 18:34:01.527747 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527789 kubelet[2475]: I0209 18:34:01.527780 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19581d2a014b16e02647601c2f1583e0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-de7ead93d8\" (UID: \"19581d2a014b16e02647601c2f1583e0\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527843 kubelet[2475]: I0209 18:34:01.527799 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: 
\"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527843 kubelet[2475]: I0209 18:34:01.527818 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.527843 kubelet[2475]: I0209 18:34:01.527836 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e7d23714bba59664f2691dcc93e4dd1-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-de7ead93d8\" (UID: \"1e7d23714bba59664f2691dcc93e4dd1\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" Feb 9 18:34:01.606192 sudo[2495]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:02.005844 kubelet[2475]: I0209 18:34:02.005803 2475 apiserver.go:52] "Watching apiserver" Feb 9 18:34:02.026253 kubelet[2475]: I0209 18:34:02.026222 2475 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:02.135369 kubelet[2475]: I0209 18:34:02.135336 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-de7ead93d8" podStartSLOduration=1.135274407 podCreationTimestamp="2024-02-09 18:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:02.12760856 +0000 UTC m=+1.199636751" watchObservedRunningTime="2024-02-09 18:34:02.135274407 +0000 UTC m=+1.207302598" Feb 9 18:34:02.142331 kubelet[2475]: I0209 18:34:02.142300 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" podStartSLOduration=1.142267484 podCreationTimestamp="2024-02-09 18:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:02.135974659 +0000 UTC m=+1.208002850" watchObservedRunningTime="2024-02-09 18:34:02.142267484 +0000 UTC m=+1.214295675" Feb 9 18:34:02.150001 kubelet[2475]: I0209 18:34:02.149973 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-de7ead93d8" podStartSLOduration=2.149927411 podCreationTimestamp="2024-02-09 18:34:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:02.142610369 +0000 UTC m=+1.214638560" watchObservedRunningTime="2024-02-09 18:34:02.149927411 +0000 UTC m=+1.221955602" Feb 9 18:34:02.920226 sudo[1710]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:02.987496 sshd[1707]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:02.990112 systemd[1]: sshd@4-10.200.20.37:22-10.200.12.6:58142.service: Deactivated successfully. Feb 9 18:34:02.990843 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:34:02.991017 systemd[1]: session-7.scope: Consumed 5.825s CPU time. Feb 9 18:34:02.991471 systemd-logind[1364]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:34:02.992514 systemd-logind[1364]: Removed session 7. Feb 9 18:34:15.598314 kubelet[2475]: I0209 18:34:15.598278 2475 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:34:15.598802 env[1379]: time="2024-02-09T18:34:15.598763156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 18:34:15.599023 kubelet[2475]: I0209 18:34:15.598991 2475 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:34:16.497572 kubelet[2475]: I0209 18:34:16.497534 2475 topology_manager.go:215] "Topology Admit Handler" podUID="4a4efe04-4c2c-4947-8f7a-b2c655769ee4" podNamespace="kube-system" podName="kube-proxy-9rfbz" Feb 9 18:34:16.502110 systemd[1]: Created slice kubepods-besteffort-pod4a4efe04_4c2c_4947_8f7a_b2c655769ee4.slice. Feb 9 18:34:16.522088 kubelet[2475]: I0209 18:34:16.522056 2475 topology_manager.go:215] "Topology Admit Handler" podUID="c244e405-4459-48aa-a762-86ea96854b16" podNamespace="kube-system" podName="cilium-64st8" Feb 9 18:34:16.526849 systemd[1]: Created slice kubepods-burstable-podc244e405_4459_48aa_a762_86ea96854b16.slice. Feb 9 18:34:16.596961 kubelet[2475]: I0209 18:34:16.596930 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-hostproc\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597172 kubelet[2475]: I0209 18:34:16.597161 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-xtables-lock\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597273 kubelet[2475]: I0209 18:34:16.597264 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a4efe04-4c2c-4947-8f7a-b2c655769ee4-kube-proxy\") pod \"kube-proxy-9rfbz\" (UID: \"4a4efe04-4c2c-4947-8f7a-b2c655769ee4\") " pod="kube-system/kube-proxy-9rfbz" Feb 9 18:34:16.597376 kubelet[2475]: I0209 18:34:16.597367 2475 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a4efe04-4c2c-4947-8f7a-b2c655769ee4-xtables-lock\") pod \"kube-proxy-9rfbz\" (UID: \"4a4efe04-4c2c-4947-8f7a-b2c655769ee4\") " pod="kube-system/kube-proxy-9rfbz" Feb 9 18:34:16.597504 kubelet[2475]: I0209 18:34:16.597493 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-bpf-maps\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597631 kubelet[2475]: I0209 18:34:16.597620 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cni-path\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597728 kubelet[2475]: I0209 18:34:16.597719 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c244e405-4459-48aa-a762-86ea96854b16-clustermesh-secrets\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597824 kubelet[2475]: I0209 18:34:16.597815 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-run\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.597919 kubelet[2475]: I0209 18:34:16.597910 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-etc-cni-netd\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598009 kubelet[2475]: I0209 18:34:16.598001 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-net\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598127 kubelet[2475]: I0209 18:34:16.598118 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-hubble-tls\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598227 kubelet[2475]: I0209 18:34:16.598219 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzv9x\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-kube-api-access-mzv9x\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598312 kubelet[2475]: I0209 18:34:16.598304 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c244e405-4459-48aa-a762-86ea96854b16-cilium-config-path\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598403 kubelet[2475]: I0209 18:34:16.598395 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tdbb\" (UniqueName: \"kubernetes.io/projected/4a4efe04-4c2c-4947-8f7a-b2c655769ee4-kube-api-access-8tdbb\") pod 
\"kube-proxy-9rfbz\" (UID: \"4a4efe04-4c2c-4947-8f7a-b2c655769ee4\") " pod="kube-system/kube-proxy-9rfbz" Feb 9 18:34:16.598747 kubelet[2475]: I0209 18:34:16.598735 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-cgroup\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598850 kubelet[2475]: I0209 18:34:16.598840 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-lib-modules\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.598938 kubelet[2475]: I0209 18:34:16.598928 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-kernel\") pod \"cilium-64st8\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") " pod="kube-system/cilium-64st8" Feb 9 18:34:16.599025 kubelet[2475]: I0209 18:34:16.599016 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a4efe04-4c2c-4947-8f7a-b2c655769ee4-lib-modules\") pod \"kube-proxy-9rfbz\" (UID: \"4a4efe04-4c2c-4947-8f7a-b2c655769ee4\") " pod="kube-system/kube-proxy-9rfbz" Feb 9 18:34:16.609875 kubelet[2475]: I0209 18:34:16.609839 2475 topology_manager.go:215] "Topology Admit Handler" podUID="29a28ca4-ce10-4fd8-9e67-6103f190f69a" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-zrkw7" Feb 9 18:34:16.614332 systemd[1]: Created slice kubepods-besteffort-pod29a28ca4_ce10_4fd8_9e67_6103f190f69a.slice. 
Feb 9 18:34:16.699724 kubelet[2475]: I0209 18:34:16.699679 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a28ca4-ce10-4fd8-9e67-6103f190f69a-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-zrkw7\" (UID: \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\") " pod="kube-system/cilium-operator-6bc8ccdb58-zrkw7" Feb 9 18:34:16.699724 kubelet[2475]: I0209 18:34:16.699731 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rj6x\" (UniqueName: \"kubernetes.io/projected/29a28ca4-ce10-4fd8-9e67-6103f190f69a-kube-api-access-2rj6x\") pod \"cilium-operator-6bc8ccdb58-zrkw7\" (UID: \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\") " pod="kube-system/cilium-operator-6bc8ccdb58-zrkw7" Feb 9 18:34:16.810117 env[1379]: time="2024-02-09T18:34:16.810007924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rfbz,Uid:4a4efe04-4c2c-4947-8f7a-b2c655769ee4,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:16.830169 env[1379]: time="2024-02-09T18:34:16.830117972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64st8,Uid:c244e405-4459-48aa-a762-86ea96854b16,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:16.851329 env[1379]: time="2024-02-09T18:34:16.850989910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:16.851329 env[1379]: time="2024-02-09T18:34:16.851027510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:16.851329 env[1379]: time="2024-02-09T18:34:16.851037990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:16.851610 env[1379]: time="2024-02-09T18:34:16.851213712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fdff6b3e18edc0c82201e07cc9e5d5cf66593249a2e87c4d47a0cf99fe9994d pid=2554 runtime=io.containerd.runc.v2 Feb 9 18:34:16.864042 systemd[1]: Started cri-containerd-5fdff6b3e18edc0c82201e07cc9e5d5cf66593249a2e87c4d47a0cf99fe9994d.scope. Feb 9 18:34:16.877051 env[1379]: time="2024-02-09T18:34:16.876989310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:16.877184 env[1379]: time="2024-02-09T18:34:16.877030191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:16.877184 env[1379]: time="2024-02-09T18:34:16.877040431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:16.877184 env[1379]: time="2024-02-09T18:34:16.877144472Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c pid=2586 runtime=io.containerd.runc.v2 Feb 9 18:34:16.891623 systemd[1]: Started cri-containerd-9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c.scope. 
Feb 9 18:34:16.899109 env[1379]: time="2024-02-09T18:34:16.899065983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rfbz,Uid:4a4efe04-4c2c-4947-8f7a-b2c655769ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fdff6b3e18edc0c82201e07cc9e5d5cf66593249a2e87c4d47a0cf99fe9994d\"" Feb 9 18:34:16.905533 env[1379]: time="2024-02-09T18:34:16.905329020Z" level=info msg="CreateContainer within sandbox \"5fdff6b3e18edc0c82201e07cc9e5d5cf66593249a2e87c4d47a0cf99fe9994d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:34:16.918055 env[1379]: time="2024-02-09T18:34:16.917652412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-zrkw7,Uid:29a28ca4-ce10-4fd8-9e67-6103f190f69a,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:16.936163 env[1379]: time="2024-02-09T18:34:16.936101920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-64st8,Uid:c244e405-4459-48aa-a762-86ea96854b16,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\"" Feb 9 18:34:16.940023 env[1379]: time="2024-02-09T18:34:16.939915287Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:34:16.978004 env[1379]: time="2024-02-09T18:34:16.977928556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:16.978188 env[1379]: time="2024-02-09T18:34:16.977972437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:16.978287 env[1379]: time="2024-02-09T18:34:16.978178319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:16.978582 env[1379]: time="2024-02-09T18:34:16.978535404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3 pid=2635 runtime=io.containerd.runc.v2 Feb 9 18:34:16.980586 env[1379]: time="2024-02-09T18:34:16.980528028Z" level=info msg="CreateContainer within sandbox \"5fdff6b3e18edc0c82201e07cc9e5d5cf66593249a2e87c4d47a0cf99fe9994d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b9841d220a1cd66e1e32c8ef67590af506ca81fee8b351268d1be8b39aeacc8\"" Feb 9 18:34:16.983049 env[1379]: time="2024-02-09T18:34:16.983015499Z" level=info msg="StartContainer for \"9b9841d220a1cd66e1e32c8ef67590af506ca81fee8b351268d1be8b39aeacc8\"" Feb 9 18:34:16.993164 systemd[1]: Started cri-containerd-62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3.scope. Feb 9 18:34:17.012916 systemd[1]: Started cri-containerd-9b9841d220a1cd66e1e32c8ef67590af506ca81fee8b351268d1be8b39aeacc8.scope. Feb 9 18:34:17.054750 env[1379]: time="2024-02-09T18:34:17.054705491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-zrkw7,Uid:29a28ca4-ce10-4fd8-9e67-6103f190f69a,Namespace:kube-system,Attempt:0,} returns sandbox id \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\"" Feb 9 18:34:17.071841 env[1379]: time="2024-02-09T18:34:17.071742017Z" level=info msg="StartContainer for \"9b9841d220a1cd66e1e32c8ef67590af506ca81fee8b351268d1be8b39aeacc8\" returns successfully" Feb 9 18:34:22.057794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683281519.mount: Deactivated successfully. 
Feb 9 18:34:24.228411 env[1379]: time="2024-02-09T18:34:24.228369269Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:24.239386 env[1379]: time="2024-02-09T18:34:24.239349506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:24.244921 env[1379]: time="2024-02-09T18:34:24.244888245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:24.245484 env[1379]: time="2024-02-09T18:34:24.245450610Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 18:34:24.246650 env[1379]: time="2024-02-09T18:34:24.246620703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:34:24.248596 env[1379]: time="2024-02-09T18:34:24.248565684Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:34:24.281570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668637419.mount: Deactivated successfully. Feb 9 18:34:24.286786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359868178.mount: Deactivated successfully. 
Feb 9 18:34:24.306413 env[1379]: time="2024-02-09T18:34:24.306371138Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\"" Feb 9 18:34:24.308548 env[1379]: time="2024-02-09T18:34:24.307182066Z" level=info msg="StartContainer for \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\"" Feb 9 18:34:24.325990 systemd[1]: Started cri-containerd-65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3.scope. Feb 9 18:34:24.357751 env[1379]: time="2024-02-09T18:34:24.357657843Z" level=info msg="StartContainer for \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\" returns successfully" Feb 9 18:34:24.361958 systemd[1]: cri-containerd-65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3.scope: Deactivated successfully. Feb 9 18:34:25.180302 kubelet[2475]: I0209 18:34:25.180273 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9rfbz" podStartSLOduration=9.180223468 podCreationTimestamp="2024-02-09 18:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:17.15710909 +0000 UTC m=+16.229137281" watchObservedRunningTime="2024-02-09 18:34:25.180223468 +0000 UTC m=+24.252251659" Feb 9 18:34:25.279481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3-rootfs.mount: Deactivated successfully. 
Feb 9 18:34:26.067917 env[1379]: time="2024-02-09T18:34:26.067873401Z" level=info msg="shim disconnected" id=65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3 Feb 9 18:34:26.068293 env[1379]: time="2024-02-09T18:34:26.068272725Z" level=warning msg="cleaning up after shim disconnected" id=65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3 namespace=k8s.io Feb 9 18:34:26.068357 env[1379]: time="2024-02-09T18:34:26.068344166Z" level=info msg="cleaning up dead shim" Feb 9 18:34:26.075406 env[1379]: time="2024-02-09T18:34:26.075366078Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2870 runtime=io.containerd.runc.v2\n" Feb 9 18:34:26.172759 env[1379]: time="2024-02-09T18:34:26.172719997Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:34:26.217782 env[1379]: time="2024-02-09T18:34:26.217729698Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\"" Feb 9 18:34:26.219477 env[1379]: time="2024-02-09T18:34:26.218463746Z" level=info msg="StartContainer for \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\"" Feb 9 18:34:26.240611 systemd[1]: Started cri-containerd-f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace.scope. 
Feb 9 18:34:26.277608 env[1379]: time="2024-02-09T18:34:26.277485751Z" level=info msg="StartContainer for \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\" returns successfully" Feb 9 18:34:26.280935 systemd[1]: run-containerd-runc-k8s.io-f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace-runc.DPFr5p.mount: Deactivated successfully. Feb 9 18:34:26.281819 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:34:26.282004 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:34:26.283913 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:34:26.285567 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:34:26.290229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:34:26.295885 systemd[1]: cri-containerd-f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace.scope: Deactivated successfully. Feb 9 18:34:26.298967 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:34:26.311859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace-rootfs.mount: Deactivated successfully. 
Feb 9 18:34:26.329310 env[1379]: time="2024-02-09T18:34:26.328584956Z" level=info msg="shim disconnected" id=f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace Feb 9 18:34:26.329562 env[1379]: time="2024-02-09T18:34:26.329541965Z" level=warning msg="cleaning up after shim disconnected" id=f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace namespace=k8s.io Feb 9 18:34:26.329677 env[1379]: time="2024-02-09T18:34:26.329661767Z" level=info msg="cleaning up dead shim" Feb 9 18:34:26.336768 env[1379]: time="2024-02-09T18:34:26.336743319Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2934 runtime=io.containerd.runc.v2\n" Feb 9 18:34:27.179464 env[1379]: time="2024-02-09T18:34:27.178419162Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:34:27.320581 env[1379]: time="2024-02-09T18:34:27.320535595Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\"" Feb 9 18:34:27.323103 env[1379]: time="2024-02-09T18:34:27.323077181Z" level=info msg="StartContainer for \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\"" Feb 9 18:34:27.345682 systemd[1]: run-containerd-runc-k8s.io-4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29-runc.3H3r47.mount: Deactivated successfully. Feb 9 18:34:27.348857 systemd[1]: Started cri-containerd-4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29.scope. Feb 9 18:34:27.375934 systemd[1]: cri-containerd-4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29.scope: Deactivated successfully. 
Feb 9 18:34:27.380723 env[1379]: time="2024-02-09T18:34:27.380674762Z" level=info msg="StartContainer for \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\" returns successfully" Feb 9 18:34:27.398967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29-rootfs.mount: Deactivated successfully. Feb 9 18:34:27.414399 env[1379]: time="2024-02-09T18:34:27.414355381Z" level=info msg="shim disconnected" id=4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29 Feb 9 18:34:27.414642 env[1379]: time="2024-02-09T18:34:27.414622744Z" level=warning msg="cleaning up after shim disconnected" id=4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29 namespace=k8s.io Feb 9 18:34:27.414726 env[1379]: time="2024-02-09T18:34:27.414712385Z" level=info msg="cleaning up dead shim" Feb 9 18:34:27.422269 env[1379]: time="2024-02-09T18:34:27.422231461Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2993 runtime=io.containerd.runc.v2\n" Feb 9 18:34:28.181060 env[1379]: time="2024-02-09T18:34:28.181011602Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:34:28.222052 env[1379]: time="2024-02-09T18:34:28.222003208Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\"" Feb 9 18:34:28.224524 env[1379]: time="2024-02-09T18:34:28.224486193Z" level=info msg="StartContainer for \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\"" Feb 9 18:34:28.244549 systemd[1]: Started 
cri-containerd-15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9.scope. Feb 9 18:34:28.274958 systemd[1]: cri-containerd-15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9.scope: Deactivated successfully. Feb 9 18:34:28.277777 env[1379]: time="2024-02-09T18:34:28.277685520Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc244e405_4459_48aa_a762_86ea96854b16.slice/cri-containerd-15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9.scope/memory.events\": no such file or directory" Feb 9 18:34:28.290232 env[1379]: time="2024-02-09T18:34:28.290183844Z" level=info msg="StartContainer for \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\" returns successfully" Feb 9 18:34:28.319208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9-rootfs.mount: Deactivated successfully. 
Feb 9 18:34:28.372773 env[1379]: time="2024-02-09T18:34:28.372714503Z" level=info msg="shim disconnected" id=15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9 Feb 9 18:34:28.373073 env[1379]: time="2024-02-09T18:34:28.373054306Z" level=warning msg="cleaning up after shim disconnected" id=15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9 namespace=k8s.io Feb 9 18:34:28.373160 env[1379]: time="2024-02-09T18:34:28.373145747Z" level=info msg="cleaning up dead shim" Feb 9 18:34:28.380538 env[1379]: time="2024-02-09T18:34:28.380493580Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\n" Feb 9 18:34:28.663157 env[1379]: time="2024-02-09T18:34:28.663110302Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.668518 env[1379]: time="2024-02-09T18:34:28.668483315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.671506 env[1379]: time="2024-02-09T18:34:28.671468905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.672076 env[1379]: time="2024-02-09T18:34:28.672045431Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 18:34:28.676945 env[1379]: 
time="2024-02-09T18:34:28.676905239Z" level=info msg="CreateContainer within sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:34:28.698580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990499661.mount: Deactivated successfully. Feb 9 18:34:28.716326 env[1379]: time="2024-02-09T18:34:28.715644183Z" level=info msg="CreateContainer within sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\"" Feb 9 18:34:28.717945 env[1379]: time="2024-02-09T18:34:28.717754764Z" level=info msg="StartContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\"" Feb 9 18:34:28.734965 systemd[1]: Started cri-containerd-d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b.scope. Feb 9 18:34:28.768037 env[1379]: time="2024-02-09T18:34:28.767972462Z" level=info msg="StartContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" returns successfully" Feb 9 18:34:29.196599 env[1379]: time="2024-02-09T18:34:29.196336077Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:34:29.242649 env[1379]: time="2024-02-09T18:34:29.242599888Z" level=info msg="CreateContainer within sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\"" Feb 9 18:34:29.243165 env[1379]: time="2024-02-09T18:34:29.243135853Z" level=info msg="StartContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\"" Feb 9 18:34:29.265343 systemd[1]: Started 
cri-containerd-88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6.scope. Feb 9 18:34:29.282002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551369110.mount: Deactivated successfully. Feb 9 18:34:29.322668 env[1379]: time="2024-02-09T18:34:29.322608308Z" level=info msg="StartContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" returns successfully" Feb 9 18:34:29.620361 kubelet[2475]: I0209 18:34:29.619613 2475 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:34:29.623453 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:34:29.650074 kubelet[2475]: I0209 18:34:29.650032 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-zrkw7" podStartSLOduration=2.0334036109999998 podCreationTimestamp="2024-02-09 18:34:16 +0000 UTC" firstStartedPulling="2024-02-09 18:34:17.055717023 +0000 UTC m=+16.127745214" lastFinishedPulling="2024-02-09 18:34:28.672305393 +0000 UTC m=+27.744333584" observedRunningTime="2024-02-09 18:34:29.25615654 +0000 UTC m=+28.328184731" watchObservedRunningTime="2024-02-09 18:34:29.649991981 +0000 UTC m=+28.722020172" Feb 9 18:34:29.650316 kubelet[2475]: I0209 18:34:29.650267 2475 topology_manager.go:215] "Topology Admit Handler" podUID="5c955d98-2ca1-415f-bf19-256af9111cda" podNamespace="kube-system" podName="coredns-5dd5756b68-hdj8g" Feb 9 18:34:29.655032 systemd[1]: Created slice kubepods-burstable-pod5c955d98_2ca1_415f_bf19_256af9111cda.slice. Feb 9 18:34:29.655939 kubelet[2475]: I0209 18:34:29.655900 2475 topology_manager.go:215] "Topology Admit Handler" podUID="23d0ebc6-63e0-442d-a42f-648bc09f1d0b" podNamespace="kube-system" podName="coredns-5dd5756b68-wgdx5" Feb 9 18:34:29.663201 systemd[1]: Created slice kubepods-burstable-pod23d0ebc6_63e0_442d_a42f_648bc09f1d0b.slice. 
Feb 9 18:34:29.818882 kubelet[2475]: I0209 18:34:29.818845 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6sr\" (UniqueName: \"kubernetes.io/projected/5c955d98-2ca1-415f-bf19-256af9111cda-kube-api-access-mh6sr\") pod \"coredns-5dd5756b68-hdj8g\" (UID: \"5c955d98-2ca1-415f-bf19-256af9111cda\") " pod="kube-system/coredns-5dd5756b68-hdj8g" Feb 9 18:34:29.819020 kubelet[2475]: I0209 18:34:29.818892 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c955d98-2ca1-415f-bf19-256af9111cda-config-volume\") pod \"coredns-5dd5756b68-hdj8g\" (UID: \"5c955d98-2ca1-415f-bf19-256af9111cda\") " pod="kube-system/coredns-5dd5756b68-hdj8g" Feb 9 18:34:29.819020 kubelet[2475]: I0209 18:34:29.818919 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23d0ebc6-63e0-442d-a42f-648bc09f1d0b-config-volume\") pod \"coredns-5dd5756b68-wgdx5\" (UID: \"23d0ebc6-63e0-442d-a42f-648bc09f1d0b\") " pod="kube-system/coredns-5dd5756b68-wgdx5" Feb 9 18:34:29.819020 kubelet[2475]: I0209 18:34:29.818941 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg86f\" (UniqueName: \"kubernetes.io/projected/23d0ebc6-63e0-442d-a42f-648bc09f1d0b-kube-api-access-lg86f\") pod \"coredns-5dd5756b68-wgdx5\" (UID: \"23d0ebc6-63e0-442d-a42f-648bc09f1d0b\") " pod="kube-system/coredns-5dd5756b68-wgdx5" Feb 9 18:34:29.961114 env[1379]: time="2024-02-09T18:34:29.961075134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hdj8g,Uid:5c955d98-2ca1-415f-bf19-256af9111cda,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:29.968717 env[1379]: time="2024-02-09T18:34:29.968667009Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-wgdx5,Uid:23d0ebc6-63e0-442d-a42f-648bc09f1d0b,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:30.217160 kubelet[2475]: I0209 18:34:30.216795 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-64st8" podStartSLOduration=6.908213635 podCreationTimestamp="2024-02-09 18:34:16 +0000 UTC" firstStartedPulling="2024-02-09 18:34:16.937358776 +0000 UTC m=+16.009386967" lastFinishedPulling="2024-02-09 18:34:24.245900415 +0000 UTC m=+23.317928606" observedRunningTime="2024-02-09 18:34:30.216136988 +0000 UTC m=+29.288165179" watchObservedRunningTime="2024-02-09 18:34:30.216755274 +0000 UTC m=+29.288783465" Feb 9 18:34:30.228468 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 18:34:32.661970 systemd-networkd[1526]: cilium_host: Link UP Feb 9 18:34:32.668177 systemd-networkd[1526]: cilium_net: Link UP Feb 9 18:34:32.668186 systemd-networkd[1526]: cilium_net: Gained carrier Feb 9 18:34:32.669028 systemd-networkd[1526]: cilium_host: Gained carrier Feb 9 18:34:32.671489 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:34:32.842261 systemd-networkd[1526]: cilium_vxlan: Link UP Feb 9 18:34:32.842267 systemd-networkd[1526]: cilium_vxlan: Gained carrier Feb 9 18:34:33.164509 kernel: NET: Registered PF_ALG protocol family Feb 9 18:34:33.236587 systemd-networkd[1526]: cilium_net: Gained IPv6LL Feb 9 18:34:33.620641 systemd-networkd[1526]: cilium_host: Gained IPv6LL Feb 9 18:34:33.881518 systemd-networkd[1526]: lxc_health: Link UP Feb 9 18:34:33.896563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:34:33.896605 systemd-networkd[1526]: lxc_health: Gained carrier Feb 9 18:34:34.033620 systemd-networkd[1526]: lxc48e4368f3357: Link UP Feb 9 18:34:34.043533 kernel: eth0: renamed from tmpdbbfe Feb 9 18:34:34.058456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc48e4368f3357: link becomes ready Feb 
9 18:34:34.057054 systemd-networkd[1526]: lxc48e4368f3357: Gained carrier Feb 9 18:34:34.072070 systemd-networkd[1526]: lxcb64faae9bc6b: Link UP Feb 9 18:34:34.083478 kernel: eth0: renamed from tmpd3779 Feb 9 18:34:34.105316 systemd-networkd[1526]: lxcb64faae9bc6b: Gained carrier Feb 9 18:34:34.105562 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb64faae9bc6b: link becomes ready Feb 9 18:34:34.388582 systemd-networkd[1526]: cilium_vxlan: Gained IPv6LL Feb 9 18:34:35.285565 systemd-networkd[1526]: lxc48e4368f3357: Gained IPv6LL Feb 9 18:34:35.669575 systemd-networkd[1526]: lxcb64faae9bc6b: Gained IPv6LL Feb 9 18:34:35.732559 systemd-networkd[1526]: lxc_health: Gained IPv6LL Feb 9 18:34:37.649483 env[1379]: time="2024-02-09T18:34:37.646857021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:37.649483 env[1379]: time="2024-02-09T18:34:37.646902461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:37.649483 env[1379]: time="2024-02-09T18:34:37.646915701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:37.649483 env[1379]: time="2024-02-09T18:34:37.648958279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782 pid=3635 runtime=io.containerd.runc.v2 Feb 9 18:34:37.660206 env[1379]: time="2024-02-09T18:34:37.659033886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:37.660206 env[1379]: time="2024-02-09T18:34:37.659075846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:37.660206 env[1379]: time="2024-02-09T18:34:37.659086686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:37.664484 env[1379]: time="2024-02-09T18:34:37.661108664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3779cd972e836bd2dd95fc5c675aeaed954bffdb836b2874b4f5cac72c6afeb pid=3650 runtime=io.containerd.runc.v2 Feb 9 18:34:37.675795 systemd[1]: Started cri-containerd-dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782.scope. Feb 9 18:34:37.686331 systemd[1]: run-containerd-runc-k8s.io-dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782-runc.l7h50n.mount: Deactivated successfully. Feb 9 18:34:37.701387 systemd[1]: Started cri-containerd-d3779cd972e836bd2dd95fc5c675aeaed954bffdb836b2874b4f5cac72c6afeb.scope. Feb 9 18:34:37.750731 env[1379]: time="2024-02-09T18:34:37.750680275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wgdx5,Uid:23d0ebc6-63e0-442d-a42f-648bc09f1d0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3779cd972e836bd2dd95fc5c675aeaed954bffdb836b2874b4f5cac72c6afeb\"" Feb 9 18:34:37.756105 env[1379]: time="2024-02-09T18:34:37.756032801Z" level=info msg="CreateContainer within sandbox \"d3779cd972e836bd2dd95fc5c675aeaed954bffdb836b2874b4f5cac72c6afeb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:34:37.761423 env[1379]: time="2024-02-09T18:34:37.761387447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hdj8g,Uid:5c955d98-2ca1-415f-bf19-256af9111cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782\"" Feb 9 18:34:37.766164 env[1379]: time="2024-02-09T18:34:37.766130208Z" level=info msg="CreateContainer within sandbox 
\"dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:34:37.815067 env[1379]: time="2024-02-09T18:34:37.815014909Z" level=info msg="CreateContainer within sandbox \"d3779cd972e836bd2dd95fc5c675aeaed954bffdb836b2874b4f5cac72c6afeb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5b2f669aa1738d9f933828ffb5a7d24d7b6543c093a2743d8b64b9416e06256\"" Feb 9 18:34:37.815948 env[1379]: time="2024-02-09T18:34:37.815922557Z" level=info msg="StartContainer for \"e5b2f669aa1738d9f933828ffb5a7d24d7b6543c093a2743d8b64b9416e06256\"" Feb 9 18:34:37.818450 env[1379]: time="2024-02-09T18:34:37.818381258Z" level=info msg="CreateContainer within sandbox \"dbbfe97b28fda653f26041a1374eb91752257ce67020bb4b489d3eedefd4d782\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9018f741a6506bda8436889cb19812259f8da568bd258ebf4f127180820b6ea\"" Feb 9 18:34:37.818964 env[1379]: time="2024-02-09T18:34:37.818929543Z" level=info msg="StartContainer for \"b9018f741a6506bda8436889cb19812259f8da568bd258ebf4f127180820b6ea\"" Feb 9 18:34:37.835067 systemd[1]: Started cri-containerd-e5b2f669aa1738d9f933828ffb5a7d24d7b6543c093a2743d8b64b9416e06256.scope. Feb 9 18:34:37.857948 systemd[1]: Started cri-containerd-b9018f741a6506bda8436889cb19812259f8da568bd258ebf4f127180820b6ea.scope. 
Feb 9 18:34:37.904832 env[1379]: time="2024-02-09T18:34:37.904713162Z" level=info msg="StartContainer for \"b9018f741a6506bda8436889cb19812259f8da568bd258ebf4f127180820b6ea\" returns successfully" Feb 9 18:34:37.908249 env[1379]: time="2024-02-09T18:34:37.908189312Z" level=info msg="StartContainer for \"e5b2f669aa1738d9f933828ffb5a7d24d7b6543c093a2743d8b64b9416e06256\" returns successfully" Feb 9 18:34:38.235874 kubelet[2475]: I0209 18:34:38.235762 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wgdx5" podStartSLOduration=22.235727305 podCreationTimestamp="2024-02-09 18:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:38.23275568 +0000 UTC m=+37.304783871" watchObservedRunningTime="2024-02-09 18:34:38.235727305 +0000 UTC m=+37.307755496" Feb 9 18:34:38.279801 kubelet[2475]: I0209 18:34:38.279516 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hdj8g" podStartSLOduration=22.279472196 podCreationTimestamp="2024-02-09 18:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:38.255584794 +0000 UTC m=+37.327612985" watchObservedRunningTime="2024-02-09 18:34:38.279472196 +0000 UTC m=+37.351500387" Feb 9 18:34:45.906275 kubelet[2475]: I0209 18:34:45.906232 2475 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:35:11.021820 update_engine[1366]: I0209 18:35:11.021780 1366 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 18:35:11.021820 update_engine[1366]: I0209 18:35:11.021816 1366 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 18:35:11.022217 update_engine[1366]: I0209 18:35:11.021983 1366 prefs.cc:52] 
aleph-version not present in /var/lib/update_engine/prefs Feb 9 18:35:11.022349 update_engine[1366]: I0209 18:35:11.022322 1366 omaha_request_params.cc:62] Current group set to lts Feb 9 18:35:11.022444 update_engine[1366]: I0209 18:35:11.022413 1366 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 18:35:11.022444 update_engine[1366]: I0209 18:35:11.022442 1366 update_attempter.cc:643] Scheduling an action processor start. Feb 9 18:35:11.022513 update_engine[1366]: I0209 18:35:11.022459 1366 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 18:35:11.022513 update_engine[1366]: I0209 18:35:11.022480 1366 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 18:35:11.022856 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 18:35:11.023151 update_engine[1366]: I0209 18:35:11.023126 1366 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 18:35:11.023151 update_engine[1366]: I0209 18:35:11.023144 1366 omaha_request_action.cc:271] Request: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: Feb 9 18:35:11.023151 update_engine[1366]: I0209 18:35:11.023149 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 18:35:11.023983 update_engine[1366]: I0209 18:35:11.023959 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 18:35:11.024167 update_engine[1366]: I0209 18:35:11.024147 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 18:35:11.037389 update_engine[1366]: E0209 18:35:11.037350 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 18:35:11.037514 update_engine[1366]: I0209 18:35:11.037492 1366 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 18:35:21.005022 update_engine[1366]: I0209 18:35:21.004916 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 18:35:21.005337 update_engine[1366]: I0209 18:35:21.005101 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 18:35:21.005337 update_engine[1366]: I0209 18:35:21.005280 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 18:35:21.079545 update_engine[1366]: E0209 18:35:21.079507 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 18:35:21.079683 update_engine[1366]: I0209 18:35:21.079610 1366 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 18:35:31.003683 update_engine[1366]: I0209 18:35:31.003633 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 18:35:31.004066 update_engine[1366]: I0209 18:35:31.003819 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 18:35:31.004066 update_engine[1366]: I0209 18:35:31.003991 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 18:35:31.107308 update_engine[1366]: E0209 18:35:31.107264 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 18:35:31.107470 update_engine[1366]: I0209 18:35:31.107370 1366 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 18:35:41.002197 update_engine[1366]: I0209 18:35:41.002152 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 18:35:41.002554 update_engine[1366]: I0209 18:35:41.002344 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 18:35:41.002554 update_engine[1366]: I0209 18:35:41.002533 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 18:35:41.021217 update_engine[1366]: E0209 18:35:41.021173 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 18:35:41.021365 update_engine[1366]: I0209 18:35:41.021314 1366 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 18:35:41.021365 update_engine[1366]: I0209 18:35:41.021324 1366 omaha_request_action.cc:621] Omaha request response: Feb 9 18:35:41.021460 update_engine[1366]: E0209 18:35:41.021422 1366 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 18:35:41.021490 update_engine[1366]: I0209 18:35:41.021466 1366 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 18:35:41.021490 update_engine[1366]: I0209 18:35:41.021469 1366 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 18:35:41.021490 update_engine[1366]: I0209 18:35:41.021472 1366 update_attempter.cc:306] Processing Done. Feb 9 18:35:41.021490 update_engine[1366]: E0209 18:35:41.021483 1366 update_attempter.cc:619] Update failed. 
Feb 9 18:35:41.021490 update_engine[1366]: I0209 18:35:41.021486 1366 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 18:35:41.021490 update_engine[1366]: I0209 18:35:41.021489 1366 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 18:35:41.021612 update_engine[1366]: I0209 18:35:41.021493 1366 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 18:35:41.021612 update_engine[1366]: I0209 18:35:41.021553 1366 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 18:35:41.021612 update_engine[1366]: I0209 18:35:41.021570 1366 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 18:35:41.021612 update_engine[1366]: I0209 18:35:41.021574 1366 omaha_request_action.cc:271] Request: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: Feb 9 18:35:41.021612 update_engine[1366]: I0209 18:35:41.021577 1366 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 18:35:41.021817 update_engine[1366]: I0209 18:35:41.021683 1366 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 18:35:41.021840 update_engine[1366]: I0209 18:35:41.021819 1366 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 9 18:35:41.022085 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 18:35:41.060664 update_engine[1366]: E0209 18:35:41.060626 1366 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060726 1366 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060732 1366 omaha_request_action.cc:621] Omaha request response:
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060736 1366 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060739 1366 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060742 1366 update_attempter.cc:306] Processing Done.
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060746 1366 update_attempter.cc:310] Error event sent.
Feb 9 18:35:41.060799 update_engine[1366]: I0209 18:35:41.060755 1366 update_check_scheduler.cc:74] Next update check in 47m39s
Feb 9 18:35:41.061074 locksmithd[1457]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 18:36:24.689535 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.12.6:50488.service.
Feb 9 18:36:25.112408 sshd[3810]: Accepted publickey for core from 10.200.12.6 port 50488 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:25.114194 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:25.118717 systemd-logind[1364]: New session 8 of user core.
Feb 9 18:36:25.119586 systemd[1]: Started session-8.scope.
Feb 9 18:36:25.522679 sshd[3810]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:25.525078 systemd[1]: sshd@5-10.200.20.37:22-10.200.12.6:50488.service: Deactivated successfully.
Feb 9 18:36:25.525865 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 18:36:25.526456 systemd-logind[1364]: Session 8 logged out. Waiting for processes to exit.
Feb 9 18:36:25.527327 systemd-logind[1364]: Removed session 8.
Feb 9 18:36:30.593063 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.12.6:37368.service.
Feb 9 18:36:31.006687 sshd[3823]: Accepted publickey for core from 10.200.12.6 port 37368 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:31.008321 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:31.012555 systemd[1]: Started session-9.scope.
Feb 9 18:36:31.013813 systemd-logind[1364]: New session 9 of user core.
Feb 9 18:36:31.372908 sshd[3823]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:31.375765 systemd[1]: sshd@6-10.200.20.37:22-10.200.12.6:37368.service: Deactivated successfully.
Feb 9 18:36:31.376536 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 18:36:31.377116 systemd-logind[1364]: Session 9 logged out. Waiting for processes to exit.
Feb 9 18:36:31.377992 systemd-logind[1364]: Removed session 9.
Feb 9 18:36:36.443340 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.12.6:37384.service.
Feb 9 18:36:36.859641 sshd[3836]: Accepted publickey for core from 10.200.12.6 port 37384 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:36.861242 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:36.865513 systemd[1]: Started session-10.scope.
Feb 9 18:36:36.865822 systemd-logind[1364]: New session 10 of user core.
Feb 9 18:36:37.233080 sshd[3836]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:37.236015 systemd[1]: sshd@7-10.200.20.37:22-10.200.12.6:37384.service: Deactivated successfully.
Feb 9 18:36:37.236209 systemd-logind[1364]: Session 10 logged out. Waiting for processes to exit.
Feb 9 18:36:37.236741 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 18:36:37.237486 systemd-logind[1364]: Removed session 10.
Feb 9 18:36:42.299000 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.12.6:48824.service.
Feb 9 18:36:42.686633 sshd[3848]: Accepted publickey for core from 10.200.12.6 port 48824 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:42.688197 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:42.692403 systemd[1]: Started session-11.scope.
Feb 9 18:36:42.693710 systemd-logind[1364]: New session 11 of user core.
Feb 9 18:36:43.030675 sshd[3848]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:43.034296 systemd[1]: sshd@8-10.200.20.37:22-10.200.12.6:48824.service: Deactivated successfully.
Feb 9 18:36:43.035043 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 18:36:43.036127 systemd-logind[1364]: Session 11 logged out. Waiting for processes to exit.
Feb 9 18:36:43.037039 systemd-logind[1364]: Removed session 11.
Feb 9 18:36:43.101316 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.12.6:48834.service.
Feb 9 18:36:43.518563 sshd[3860]: Accepted publickey for core from 10.200.12.6 port 48834 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:43.519843 sshd[3860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:43.523715 systemd-logind[1364]: New session 12 of user core.
Feb 9 18:36:43.524180 systemd[1]: Started session-12.scope.
Feb 9 18:36:44.489244 sshd[3860]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:44.491719 systemd[1]: sshd@9-10.200.20.37:22-10.200.12.6:48834.service: Deactivated successfully.
Feb 9 18:36:44.494060 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 18:36:44.494752 systemd-logind[1364]: Session 12 logged out. Waiting for processes to exit.
Feb 9 18:36:44.495868 systemd-logind[1364]: Removed session 12.
Feb 9 18:36:44.553277 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.12.6:48840.service.
Feb 9 18:36:44.935903 sshd[3870]: Accepted publickey for core from 10.200.12.6 port 48840 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:44.937485 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:44.940787 systemd-logind[1364]: New session 13 of user core.
Feb 9 18:36:44.942232 systemd[1]: Started session-13.scope.
Feb 9 18:36:45.288306 sshd[3870]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:45.291243 systemd[1]: sshd@10-10.200.20.37:22-10.200.12.6:48840.service: Deactivated successfully.
Feb 9 18:36:45.291978 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 18:36:45.292377 systemd-logind[1364]: Session 13 logged out. Waiting for processes to exit.
Feb 9 18:36:45.293053 systemd-logind[1364]: Removed session 13.
Feb 9 18:36:50.351716 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.12.6:34928.service.
Feb 9 18:36:50.732850 sshd[3883]: Accepted publickey for core from 10.200.12.6 port 34928 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:50.734186 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:50.738742 systemd[1]: Started session-14.scope.
Feb 9 18:36:50.739056 systemd-logind[1364]: New session 14 of user core.
Feb 9 18:36:51.075703 sshd[3883]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:51.078904 systemd[1]: sshd@11-10.200.20.37:22-10.200.12.6:34928.service: Deactivated successfully.
Feb 9 18:36:51.079071 systemd-logind[1364]: Session 14 logged out. Waiting for processes to exit.
Feb 9 18:36:51.079650 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 18:36:51.080495 systemd-logind[1364]: Removed session 14.
Feb 9 18:36:56.140859 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.12.6:34940.service.
Feb 9 18:36:56.523239 sshd[3895]: Accepted publickey for core from 10.200.12.6 port 34940 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:56.524844 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:56.529007 systemd[1]: Started session-15.scope.
Feb 9 18:36:56.529325 systemd-logind[1364]: New session 15 of user core.
Feb 9 18:36:56.866507 sshd[3895]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:56.869793 systemd[1]: sshd@12-10.200.20.37:22-10.200.12.6:34940.service: Deactivated successfully.
Feb 9 18:36:56.870554 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 18:36:56.871152 systemd-logind[1364]: Session 15 logged out. Waiting for processes to exit.
Feb 9 18:36:56.872029 systemd-logind[1364]: Removed session 15.
Feb 9 18:36:56.931271 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.12.6:34946.service.
Feb 9 18:36:57.320823 sshd[3906]: Accepted publickey for core from 10.200.12.6 port 34946 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:57.322379 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:57.326635 systemd[1]: Started session-16.scope.
Feb 9 18:36:57.328006 systemd-logind[1364]: New session 16 of user core.
Feb 9 18:36:57.691242 sshd[3906]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:57.693886 systemd[1]: sshd@13-10.200.20.37:22-10.200.12.6:34946.service: Deactivated successfully.
Feb 9 18:36:57.694622 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 18:36:57.695162 systemd-logind[1364]: Session 16 logged out. Waiting for processes to exit.
Feb 9 18:36:57.695859 systemd-logind[1364]: Removed session 16.
Feb 9 18:36:57.757000 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.12.6:53010.service.
Feb 9 18:36:58.146278 sshd[3916]: Accepted publickey for core from 10.200.12.6 port 53010 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:58.147621 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:58.152128 systemd[1]: Started session-17.scope.
Feb 9 18:36:58.152473 systemd-logind[1364]: New session 17 of user core.
Feb 9 18:36:59.232843 sshd[3916]: pam_unix(sshd:session): session closed for user core
Feb 9 18:36:59.236032 systemd[1]: sshd@14-10.200.20.37:22-10.200.12.6:53010.service: Deactivated successfully.
Feb 9 18:36:59.236798 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 18:36:59.237154 systemd-logind[1364]: Session 17 logged out. Waiting for processes to exit.
Feb 9 18:36:59.237968 systemd-logind[1364]: Removed session 17.
Feb 9 18:36:59.297236 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.12.6:53026.service.
Feb 9 18:36:59.680273 sshd[3933]: Accepted publickey for core from 10.200.12.6 port 53026 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:36:59.681549 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:36:59.685528 systemd-logind[1364]: New session 18 of user core.
Feb 9 18:36:59.685934 systemd[1]: Started session-18.scope.
Feb 9 18:37:00.189167 sshd[3933]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:00.191712 systemd-logind[1364]: Session 18 logged out. Waiting for processes to exit.
Feb 9 18:37:00.191955 systemd[1]: sshd@15-10.200.20.37:22-10.200.12.6:53026.service: Deactivated successfully.
Feb 9 18:37:00.192682 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 18:37:00.193419 systemd-logind[1364]: Removed session 18.
Feb 9 18:37:00.253267 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.12.6:53042.service.
Feb 9 18:37:00.635911 sshd[3943]: Accepted publickey for core from 10.200.12.6 port 53042 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:00.637506 sshd[3943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:00.641132 systemd-logind[1364]: New session 19 of user core.
Feb 9 18:37:00.641638 systemd[1]: Started session-19.scope.
Feb 9 18:37:00.981386 sshd[3943]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:00.983728 systemd-logind[1364]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:37:00.983989 systemd[1]: sshd@16-10.200.20.37:22-10.200.12.6:53042.service: Deactivated successfully.
Feb 9 18:37:00.984734 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:37:00.985610 systemd-logind[1364]: Removed session 19.
Feb 9 18:37:06.050346 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.12.6:53058.service.
Feb 9 18:37:06.464375 sshd[3959]: Accepted publickey for core from 10.200.12.6 port 53058 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:06.466051 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:06.470371 systemd[1]: Started session-20.scope.
Feb 9 18:37:06.470708 systemd-logind[1364]: New session 20 of user core.
Feb 9 18:37:06.824678 sshd[3959]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:06.827187 systemd[1]: sshd@17-10.200.20.37:22-10.200.12.6:53058.service: Deactivated successfully.
Feb 9 18:37:06.827960 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 18:37:06.828575 systemd-logind[1364]: Session 20 logged out. Waiting for processes to exit.
Feb 9 18:37:06.829404 systemd-logind[1364]: Removed session 20.
Feb 9 18:37:11.895828 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.12.6:58972.service.
Feb 9 18:37:12.319021 sshd[3972]: Accepted publickey for core from 10.200.12.6 port 58972 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:12.320218 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:12.324698 systemd[1]: Started session-21.scope.
Feb 9 18:37:12.325282 systemd-logind[1364]: New session 21 of user core.
Feb 9 18:37:12.681133 sshd[3972]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:12.683896 systemd-logind[1364]: Session 21 logged out. Waiting for processes to exit.
Feb 9 18:37:12.684478 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 18:37:12.685014 systemd[1]: sshd@18-10.200.20.37:22-10.200.12.6:58972.service: Deactivated successfully.
Feb 9 18:37:12.685851 systemd-logind[1364]: Removed session 21.
Feb 9 18:37:17.750575 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.12.6:37916.service.
Feb 9 18:37:18.138248 sshd[3986]: Accepted publickey for core from 10.200.12.6 port 37916 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:18.139863 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:18.144113 systemd[1]: Started session-22.scope.
Feb 9 18:37:18.144692 systemd-logind[1364]: New session 22 of user core.
Feb 9 18:37:18.476424 sshd[3986]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:18.479332 systemd[1]: sshd@19-10.200.20.37:22-10.200.12.6:37916.service: Deactivated successfully.
Feb 9 18:37:18.480108 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 18:37:18.480718 systemd-logind[1364]: Session 22 logged out. Waiting for processes to exit.
Feb 9 18:37:18.481628 systemd-logind[1364]: Removed session 22.
Feb 9 18:37:18.543058 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.12.6:37924.service.
Feb 9 18:37:18.927661 sshd[3999]: Accepted publickey for core from 10.200.12.6 port 37924 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:18.929312 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:18.933805 systemd[1]: Started session-23.scope.
Feb 9 18:37:18.934613 systemd-logind[1364]: New session 23 of user core.
Feb 9 18:37:21.337722 systemd[1]: run-containerd-runc-k8s.io-88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6-runc.3kHxqz.mount: Deactivated successfully.
Feb 9 18:37:21.350441 env[1379]: time="2024-02-09T18:37:21.350376821Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:37:21.355773 env[1379]: time="2024-02-09T18:37:21.355736208Z" level=info msg="StopContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" with timeout 2 (s)"
Feb 9 18:37:21.356143 env[1379]: time="2024-02-09T18:37:21.356107447Z" level=info msg="Stop container \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" with signal terminated"
Feb 9 18:37:21.362675 systemd-networkd[1526]: lxc_health: Link DOWN
Feb 9 18:37:21.362683 systemd-networkd[1526]: lxc_health: Lost carrier
Feb 9 18:37:21.386100 systemd[1]: cri-containerd-88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6.scope: Deactivated successfully.
Feb 9 18:37:21.386386 systemd[1]: cri-containerd-88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6.scope: Consumed 6.437s CPU time.
Feb 9 18:37:21.400302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6-rootfs.mount: Deactivated successfully.
Feb 9 18:37:21.414869 env[1379]: time="2024-02-09T18:37:21.414831499Z" level=info msg="StopContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" with timeout 30 (s)"
Feb 9 18:37:21.416822 env[1379]: time="2024-02-09T18:37:21.416792334Z" level=info msg="Stop container \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" with signal terminated"
Feb 9 18:37:21.424250 systemd[1]: cri-containerd-d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b.scope: Deactivated successfully.
Feb 9 18:37:21.442440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b-rootfs.mount: Deactivated successfully.
Feb 9 18:37:21.463035 env[1379]: time="2024-02-09T18:37:21.462987498Z" level=info msg="shim disconnected" id=d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b
Feb 9 18:37:21.463399 env[1379]: time="2024-02-09T18:37:21.463380457Z" level=warning msg="cleaning up after shim disconnected" id=d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b namespace=k8s.io
Feb 9 18:37:21.463516 env[1379]: time="2024-02-09T18:37:21.463500977Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:21.463812 env[1379]: time="2024-02-09T18:37:21.463281417Z" level=info msg="shim disconnected" id=88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6
Feb 9 18:37:21.463916 env[1379]: time="2024-02-09T18:37:21.463899896Z" level=warning msg="cleaning up after shim disconnected" id=88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6 namespace=k8s.io
Feb 9 18:37:21.463994 env[1379]: time="2024-02-09T18:37:21.463980656Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:21.474938 env[1379]: time="2024-02-09T18:37:21.474899868Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:21.476771 env[1379]: time="2024-02-09T18:37:21.476744224Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4069 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:21.482541 env[1379]: time="2024-02-09T18:37:21.482419969Z" level=info msg="StopContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" returns successfully"
Feb 9 18:37:21.483084 env[1379]: time="2024-02-09T18:37:21.483058008Z" level=info msg="StopPodSandbox for \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\""
Feb 9 18:37:21.483139 env[1379]: time="2024-02-09T18:37:21.483118488Z" level=info msg="Container to stop \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.483607 env[1379]: time="2024-02-09T18:37:21.483575286Z" level=info msg="StopContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" returns successfully"
Feb 9 18:37:21.483970 env[1379]: time="2024-02-09T18:37:21.483944046Z" level=info msg="StopPodSandbox for \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\""
Feb 9 18:37:21.484039 env[1379]: time="2024-02-09T18:37:21.483989725Z" level=info msg="Container to stop \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.484039 env[1379]: time="2024-02-09T18:37:21.484004365Z" level=info msg="Container to stop \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.484039 env[1379]: time="2024-02-09T18:37:21.484016485Z" level=info msg="Container to stop \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.484039 env[1379]: time="2024-02-09T18:37:21.484027765Z" level=info msg="Container to stop \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.484170 env[1379]: time="2024-02-09T18:37:21.484039085Z" level=info msg="Container to stop \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:21.489815 systemd[1]: cri-containerd-9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c.scope: Deactivated successfully.
Feb 9 18:37:21.491713 systemd[1]: cri-containerd-62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3.scope: Deactivated successfully.
Feb 9 18:37:21.536857 env[1379]: time="2024-02-09T18:37:21.536806713Z" level=info msg="shim disconnected" id=9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c
Feb 9 18:37:21.536857 env[1379]: time="2024-02-09T18:37:21.536846952Z" level=warning msg="cleaning up after shim disconnected" id=9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c namespace=k8s.io
Feb 9 18:37:21.536857 env[1379]: time="2024-02-09T18:37:21.536855872Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:21.538126 env[1379]: time="2024-02-09T18:37:21.537889070Z" level=info msg="shim disconnected" id=62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3
Feb 9 18:37:21.538126 env[1379]: time="2024-02-09T18:37:21.538123829Z" level=warning msg="cleaning up after shim disconnected" id=62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3 namespace=k8s.io
Feb 9 18:37:21.538259 env[1379]: time="2024-02-09T18:37:21.538134309Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:21.545644 env[1379]: time="2024-02-09T18:37:21.545601530Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4133 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:21.545911 env[1379]: time="2024-02-09T18:37:21.545880290Z" level=info msg="TearDown network for sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" successfully"
Feb 9 18:37:21.545946 env[1379]: time="2024-02-09T18:37:21.545908010Z" level=info msg="StopPodSandbox for \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" returns successfully"
Feb 9 18:37:21.546895 env[1379]: time="2024-02-09T18:37:21.546714528Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4134 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:21.547591 env[1379]: time="2024-02-09T18:37:21.547568085Z" level=info msg="TearDown network for sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" successfully"
Feb 9 18:37:21.547960 env[1379]: time="2024-02-09T18:37:21.547923165Z" level=info msg="StopPodSandbox for \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" returns successfully"
Feb 9 18:37:21.658597 kubelet[2475]: I0209 18:37:21.658571 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a28ca4-ce10-4fd8-9e67-6103f190f69a-cilium-config-path\") pod \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\" (UID: \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\") "
Feb 9 18:37:21.659520 kubelet[2475]: I0209 18:37:21.659494 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-run\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659588 kubelet[2475]: I0209 18:37:21.659533 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c244e405-4459-48aa-a762-86ea96854b16-cilium-config-path\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659588 kubelet[2475]: I0209 18:37:21.659551 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-cgroup\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659641 kubelet[2475]: I0209 18:37:21.659590 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-hubble-tls\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659641 kubelet[2475]: I0209 18:37:21.659608 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-hostproc\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659641 kubelet[2475]: I0209 18:37:21.659624 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cni-path\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659641 kubelet[2475]: I0209 18:37:21.659641 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-etc-cni-netd\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659740 kubelet[2475]: I0209 18:37:21.659657 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-net\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.659740 kubelet[2475]: I0209 18:37:21.659677 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rj6x\" (UniqueName: \"kubernetes.io/projected/29a28ca4-ce10-4fd8-9e67-6103f190f69a-kube-api-access-2rj6x\") pod \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\" (UID: \"29a28ca4-ce10-4fd8-9e67-6103f190f69a\") "
Feb 9 18:37:21.661301 kubelet[2475]: I0209 18:37:21.661242 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29a28ca4-ce10-4fd8-9e67-6103f190f69a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29a28ca4-ce10-4fd8-9e67-6103f190f69a" (UID: "29a28ca4-ce10-4fd8-9e67-6103f190f69a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:37:21.662200 kubelet[2475]: I0209 18:37:21.662157 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a28ca4-ce10-4fd8-9e67-6103f190f69a-kube-api-access-2rj6x" (OuterVolumeSpecName: "kube-api-access-2rj6x") pod "29a28ca4-ce10-4fd8-9e67-6103f190f69a" (UID: "29a28ca4-ce10-4fd8-9e67-6103f190f69a"). InnerVolumeSpecName "kube-api-access-2rj6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:37:21.662277 kubelet[2475]: I0209 18:37:21.662209 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-hostproc" (OuterVolumeSpecName: "hostproc") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.662277 kubelet[2475]: I0209 18:37:21.662232 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cni-path" (OuterVolumeSpecName: "cni-path") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.662277 kubelet[2475]: I0209 18:37:21.662248 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.662277 kubelet[2475]: I0209 18:37:21.662263 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.663932 kubelet[2475]: I0209 18:37:21.663910 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:37:21.664050 kubelet[2475]: I0209 18:37:21.664036 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.664109 kubelet[2475]: I0209 18:37:21.664080 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c244e405-4459-48aa-a762-86ea96854b16-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:37:21.664176 kubelet[2475]: I0209 18:37:21.664101 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.760504 kubelet[2475]: I0209 18:37:21.760471 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-bpf-maps\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.760725 kubelet[2475]: I0209 18:37:21.760712 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-kernel\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.760822 kubelet[2475]: I0209 18:37:21.760811 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-xtables-lock\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.760888 kubelet[2475]: I0209 18:37:21.760865 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:21.760960 kubelet[2475]: I0209 18:37:21.760948 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c244e405-4459-48aa-a762-86ea96854b16-clustermesh-secrets\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.761034 kubelet[2475]: I0209 18:37:21.761025 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzv9x\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-kube-api-access-mzv9x\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.761102 kubelet[2475]: I0209 18:37:21.761093 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-lib-modules\") pod \"c244e405-4459-48aa-a762-86ea96854b16\" (UID: \"c244e405-4459-48aa-a762-86ea96854b16\") "
Feb 9 18:37:21.761199 kubelet[2475]: I0209 18:37:21.761189 2475 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-net\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:21.761266 kubelet[2475]: I0209 18:37:21.761256 2475 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2rj6x\" (UniqueName: \"kubernetes.io/projected/29a28ca4-ce10-4fd8-9e67-6103f190f69a-kube-api-access-2rj6x\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:21.761324 kubelet[2475]: I0209 18:37:21.761315 2475 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-hostproc\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9
18:37:21.761386 kubelet[2475]: I0209 18:37:21.761377 2475 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cni-path\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761482 kubelet[2475]: I0209 18:37:21.761468 2475 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-etc-cni-netd\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761557 kubelet[2475]: I0209 18:37:21.761548 2475 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-xtables-lock\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761617 kubelet[2475]: I0209 18:37:21.761608 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29a28ca4-ce10-4fd8-9e67-6103f190f69a-cilium-config-path\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761710 kubelet[2475]: I0209 18:37:21.761701 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-run\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761769 kubelet[2475]: I0209 18:37:21.761760 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c244e405-4459-48aa-a762-86ea96854b16-cilium-config-path\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761828 kubelet[2475]: I0209 18:37:21.761818 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-cilium-cgroup\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761885 
kubelet[2475]: I0209 18:37:21.761876 2475 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-hubble-tls\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.761965 kubelet[2475]: I0209 18:37:21.761952 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:37:21.762026 kubelet[2475]: I0209 18:37:21.760551 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:37:21.762079 kubelet[2475]: I0209 18:37:21.760764 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:37:21.763523 kubelet[2475]: I0209 18:37:21.763471 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c244e405-4459-48aa-a762-86ea96854b16-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:37:21.764463 kubelet[2475]: I0209 18:37:21.764418 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-kube-api-access-mzv9x" (OuterVolumeSpecName: "kube-api-access-mzv9x") pod "c244e405-4459-48aa-a762-86ea96854b16" (UID: "c244e405-4459-48aa-a762-86ea96854b16"). InnerVolumeSpecName "kube-api-access-mzv9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:37:21.862736 kubelet[2475]: I0209 18:37:21.862705 2475 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-bpf-maps\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.862927 kubelet[2475]: I0209 18:37:21.862916 2475 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.862996 kubelet[2475]: I0209 18:37:21.862987 2475 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c244e405-4459-48aa-a762-86ea96854b16-clustermesh-secrets\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.863057 kubelet[2475]: I0209 18:37:21.863049 2475 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mzv9x\" (UniqueName: \"kubernetes.io/projected/c244e405-4459-48aa-a762-86ea96854b16-kube-api-access-mzv9x\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:21.863120 kubelet[2475]: I0209 18:37:21.863110 2475 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c244e405-4459-48aa-a762-86ea96854b16-lib-modules\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\"" Feb 9 18:37:22.332001 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3-rootfs.mount: Deactivated successfully. Feb 9 18:37:22.332106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3-shm.mount: Deactivated successfully. Feb 9 18:37:22.332165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c-rootfs.mount: Deactivated successfully. Feb 9 18:37:22.332220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c-shm.mount: Deactivated successfully. Feb 9 18:37:22.332271 systemd[1]: var-lib-kubelet-pods-29a28ca4\x2dce10\x2d4fd8\x2d9e67\x2d6103f190f69a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rj6x.mount: Deactivated successfully. Feb 9 18:37:22.332327 systemd[1]: var-lib-kubelet-pods-c244e405\x2d4459\x2d48aa\x2da762\x2d86ea96854b16-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzv9x.mount: Deactivated successfully. Feb 9 18:37:22.332381 systemd[1]: var-lib-kubelet-pods-c244e405\x2d4459\x2d48aa\x2da762\x2d86ea96854b16-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:37:22.332448 systemd[1]: var-lib-kubelet-pods-c244e405\x2d4459\x2d48aa\x2da762\x2d86ea96854b16-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:37:22.526503 systemd[1]: Removed slice kubepods-besteffort-pod29a28ca4_ce10_4fd8_9e67_6103f190f69a.slice. Feb 9 18:37:22.527238 kubelet[2475]: I0209 18:37:22.527211 2475 scope.go:117] "RemoveContainer" containerID="d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b" Feb 9 18:37:22.530412 systemd[1]: Removed slice kubepods-burstable-podc244e405_4459_48aa_a762_86ea96854b16.slice. 
Feb 9 18:37:22.530495 systemd[1]: kubepods-burstable-podc244e405_4459_48aa_a762_86ea96854b16.slice: Consumed 6.526s CPU time. Feb 9 18:37:22.535199 env[1379]: time="2024-02-09T18:37:22.534910278Z" level=info msg="RemoveContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\"" Feb 9 18:37:22.543137 env[1379]: time="2024-02-09T18:37:22.542982418Z" level=info msg="RemoveContainer for \"d40b75fcd1ce0a3009a5c87c749bc00d21416393f1c99737d8a7f2ad9486980b\" returns successfully" Feb 9 18:37:22.543494 kubelet[2475]: I0209 18:37:22.543476 2475 scope.go:117] "RemoveContainer" containerID="88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6" Feb 9 18:37:22.544747 env[1379]: time="2024-02-09T18:37:22.544708814Z" level=info msg="RemoveContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\"" Feb 9 18:37:22.552379 env[1379]: time="2024-02-09T18:37:22.552351716Z" level=info msg="RemoveContainer for \"88b1ca04e89d8e512a8b728b0ae2cb77c97aa38a176c2624397c7101fc9314d6\" returns successfully" Feb 9 18:37:22.552760 kubelet[2475]: I0209 18:37:22.552742 2475 scope.go:117] "RemoveContainer" containerID="15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9" Feb 9 18:37:22.554573 env[1379]: time="2024-02-09T18:37:22.554549710Z" level=info msg="RemoveContainer for \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\"" Feb 9 18:37:22.562530 env[1379]: time="2024-02-09T18:37:22.562502411Z" level=info msg="RemoveContainer for \"15c4ee7d0dec6e54359658e4fae63d6b02b7a63c3d1ff268f8373304e4e808c9\" returns successfully" Feb 9 18:37:22.562809 kubelet[2475]: I0209 18:37:22.562793 2475 scope.go:117] "RemoveContainer" containerID="4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29" Feb 9 18:37:22.563870 env[1379]: time="2024-02-09T18:37:22.563846407Z" level=info msg="RemoveContainer for \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\"" Feb 9 18:37:22.573358 env[1379]: 
time="2024-02-09T18:37:22.573330864Z" level=info msg="RemoveContainer for \"4cd128d30c20a4645dab4e27a5d86490ec8128b2caaa57119621a74801e50a29\" returns successfully" Feb 9 18:37:22.573700 kubelet[2475]: I0209 18:37:22.573682 2475 scope.go:117] "RemoveContainer" containerID="f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace" Feb 9 18:37:22.574707 env[1379]: time="2024-02-09T18:37:22.574676821Z" level=info msg="RemoveContainer for \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\"" Feb 9 18:37:22.585119 env[1379]: time="2024-02-09T18:37:22.584133358Z" level=info msg="RemoveContainer for \"f0c700fc99119f52d91c99b5b1c641f41fd54d09c956e1c3479d5da741e57ace\" returns successfully" Feb 9 18:37:22.585220 kubelet[2475]: I0209 18:37:22.584569 2475 scope.go:117] "RemoveContainer" containerID="65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3" Feb 9 18:37:22.585538 env[1379]: time="2024-02-09T18:37:22.585512594Z" level=info msg="RemoveContainer for \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\"" Feb 9 18:37:22.595411 env[1379]: time="2024-02-09T18:37:22.595375810Z" level=info msg="RemoveContainer for \"65cbcdc5eabf26d55f5ae07fdb421d2ecb349e0dc9d8e259df3d2998bdc708f3\" returns successfully" Feb 9 18:37:23.040920 kubelet[2475]: I0209 18:37:23.040894 2475 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="29a28ca4-ce10-4fd8-9e67-6103f190f69a" path="/var/lib/kubelet/pods/29a28ca4-ce10-4fd8-9e67-6103f190f69a/volumes" Feb 9 18:37:23.041721 kubelet[2475]: I0209 18:37:23.041705 2475 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c244e405-4459-48aa-a762-86ea96854b16" path="/var/lib/kubelet/pods/c244e405-4459-48aa-a762-86ea96854b16/volumes" Feb 9 18:37:23.334741 sshd[3999]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:23.337733 systemd-logind[1364]: Session 23 logged out. Waiting for processes to exit. 
Feb 9 18:37:23.338301 systemd[1]: sshd@20-10.200.20.37:22-10.200.12.6:37924.service: Deactivated successfully. Feb 9 18:37:23.339327 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:37:23.339635 systemd[1]: session-23.scope: Consumed 1.530s CPU time. Feb 9 18:37:23.340341 systemd-logind[1364]: Removed session 23. Feb 9 18:37:23.404563 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.12.6:37940.service. Feb 9 18:37:23.788190 sshd[4166]: Accepted publickey for core from 10.200.12.6 port 37940 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:23.789497 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:23.794800 systemd[1]: Started session-24.scope. Feb 9 18:37:23.795483 systemd-logind[1364]: New session 24 of user core. Feb 9 18:37:24.713867 kubelet[2475]: I0209 18:37:24.713830 2475 topology_manager.go:215] "Topology Admit Handler" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" podNamespace="kube-system" podName="cilium-bnnqb" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713888 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="mount-cgroup" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713898 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="clean-cilium-state" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713906 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="apply-sysctl-overwrites" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713913 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="mount-bpf-fs" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713919 2475 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="29a28ca4-ce10-4fd8-9e67-6103f190f69a" containerName="cilium-operator" Feb 9 18:37:24.714190 kubelet[2475]: E0209 18:37:24.713926 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="cilium-agent" Feb 9 18:37:24.714190 kubelet[2475]: I0209 18:37:24.713944 2475 memory_manager.go:346] "RemoveStaleState removing state" podUID="c244e405-4459-48aa-a762-86ea96854b16" containerName="cilium-agent" Feb 9 18:37:24.714190 kubelet[2475]: I0209 18:37:24.713951 2475 memory_manager.go:346] "RemoveStaleState removing state" podUID="29a28ca4-ce10-4fd8-9e67-6103f190f69a" containerName="cilium-operator" Feb 9 18:37:24.718871 systemd[1]: Created slice kubepods-burstable-pod510c09d0_3c10_403d_97f1_c756ab1e24e6.slice. Feb 9 18:37:24.742575 sshd[4166]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:24.745392 systemd-logind[1364]: Session 24 logged out. Waiting for processes to exit. Feb 9 18:37:24.746096 systemd[1]: sshd@21-10.200.20.37:22-10.200.12.6:37940.service: Deactivated successfully. Feb 9 18:37:24.746835 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:37:24.747996 systemd-logind[1364]: Removed session 24. 
Feb 9 18:37:24.779199 kubelet[2475]: I0209 18:37:24.779144 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-etc-cni-netd\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.779450 kubelet[2475]: I0209 18:37:24.779415 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-config-path\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.779590 kubelet[2475]: I0209 18:37:24.779579 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szzzh\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-kube-api-access-szzzh\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.779717 kubelet[2475]: I0209 18:37:24.779707 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-hubble-tls\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.779874 kubelet[2475]: I0209 18:37:24.779861 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-lib-modules\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.779999 kubelet[2475]: I0209 18:37:24.779989 2475 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-run\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780107 kubelet[2475]: I0209 18:37:24.780097 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-cgroup\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780220 kubelet[2475]: I0209 18:37:24.780211 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-clustermesh-secrets\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780327 kubelet[2475]: I0209 18:37:24.780318 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-ipsec-secrets\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780457 kubelet[2475]: I0209 18:37:24.780446 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-kernel\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780579 kubelet[2475]: I0209 18:37:24.780568 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-bpf-maps\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780688 kubelet[2475]: I0209 18:37:24.780679 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-hostproc\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780798 kubelet[2475]: I0209 18:37:24.780789 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cni-path\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.780908 kubelet[2475]: I0209 18:37:24.780899 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-net\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.781020 kubelet[2475]: I0209 18:37:24.781010 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-xtables-lock\") pod \"cilium-bnnqb\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") " pod="kube-system/cilium-bnnqb" Feb 9 18:37:24.811945 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.12.6:37954.service. 
Feb 9 18:37:25.028619 env[1379]: time="2024-02-09T18:37:25.027507379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnnqb,Uid:510c09d0-3c10-403d-97f1-c756ab1e24e6,Namespace:kube-system,Attempt:0,}" Feb 9 18:37:25.062504 env[1379]: time="2024-02-09T18:37:25.062407461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:37:25.062504 env[1379]: time="2024-02-09T18:37:25.062478021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:37:25.062672 env[1379]: time="2024-02-09T18:37:25.062488340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:37:25.062876 env[1379]: time="2024-02-09T18:37:25.062833820Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303 pid=4192 runtime=io.containerd.runc.v2 Feb 9 18:37:25.073288 systemd[1]: Started cri-containerd-1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303.scope. 
Feb 9 18:37:25.096295 env[1379]: time="2024-02-09T18:37:25.096249704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bnnqb,Uid:510c09d0-3c10-403d-97f1-c756ab1e24e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\"" Feb 9 18:37:25.105458 env[1379]: time="2024-02-09T18:37:25.103556448Z" level=info msg="CreateContainer within sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:37:25.138047 env[1379]: time="2024-02-09T18:37:25.138006290Z" level=info msg="CreateContainer within sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\"" Feb 9 18:37:25.139703 env[1379]: time="2024-02-09T18:37:25.138668289Z" level=info msg="StartContainer for \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\"" Feb 9 18:37:25.155401 systemd[1]: Started cri-containerd-56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a.scope. Feb 9 18:37:25.164309 systemd[1]: cri-containerd-56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a.scope: Deactivated successfully. Feb 9 18:37:25.164646 systemd[1]: Stopped cri-containerd-56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a.scope. 
Feb 9 18:37:25.198189 env[1379]: time="2024-02-09T18:37:25.198138755Z" level=info msg="shim disconnected" id=56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a Feb 9 18:37:25.198409 env[1379]: time="2024-02-09T18:37:25.198391634Z" level=warning msg="cleaning up after shim disconnected" id=56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a namespace=k8s.io Feb 9 18:37:25.198507 env[1379]: time="2024-02-09T18:37:25.198492674Z" level=info msg="cleaning up dead shim" Feb 9 18:37:25.205456 env[1379]: time="2024-02-09T18:37:25.205390939Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:37:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:37:25.205922 env[1379]: time="2024-02-09T18:37:25.205827898Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 9 18:37:25.206549 env[1379]: time="2024-02-09T18:37:25.206482616Z" level=error msg="Failed to pipe stderr of container \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\"" error="reading from a closed fifo" Feb 9 18:37:25.206613 env[1379]: time="2024-02-09T18:37:25.206484096Z" level=error msg="Failed to pipe stdout of container \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\"" error="reading from a closed fifo" Feb 9 18:37:25.212438 env[1379]: time="2024-02-09T18:37:25.212319363Z" level=error msg="StartContainer for \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:37:25.213173 kubelet[2475]: E0209 18:37:25.212686 2475 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a" Feb 9 18:37:25.213173 kubelet[2475]: E0209 18:37:25.212864 2475 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:37:25.213173 kubelet[2475]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:37:25.213173 kubelet[2475]: rm /hostbin/cilium-mount Feb 9 18:37:25.213353 kubelet[2475]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-szzzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bnnqb_kube-system(510c09d0-3c10-403d-97f1-c756ab1e24e6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:37:25.213454 kubelet[2475]: E0209 18:37:25.212913 2475 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bnnqb" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" Feb 9 18:37:25.226445 sshd[4179]: Accepted publickey for core from 10.200.12.6 port 37954 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:25.227026 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:25.231367 systemd[1]: Started session-25.scope. Feb 9 18:37:25.231704 systemd-logind[1364]: New session 25 of user core. 
Feb 9 18:37:25.537647 env[1379]: time="2024-02-09T18:37:25.537603710Z" level=info msg="CreateContainer within sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 9 18:37:25.574805 env[1379]: time="2024-02-09T18:37:25.574760467Z" level=info msg="CreateContainer within sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\""
Feb 9 18:37:25.575692 env[1379]: time="2024-02-09T18:37:25.575649985Z" level=info msg="StartContainer for \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\""
Feb 9 18:37:25.591209 systemd[1]: Started cri-containerd-6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0.scope.
Feb 9 18:37:25.595300 sshd[4179]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:25.600993 systemd[1]: sshd@22-10.200.20.37:22-10.200.12.6:37954.service: Deactivated successfully.
Feb 9 18:37:25.601694 systemd[1]: session-25.scope: Deactivated successfully.
Feb 9 18:37:25.602300 systemd-logind[1364]: Session 25 logged out. Waiting for processes to exit.
Feb 9 18:37:25.603131 systemd-logind[1364]: Removed session 25.
Feb 9 18:37:25.604419 systemd[1]: cri-containerd-6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0.scope: Deactivated successfully.
Feb 9 18:37:25.628526 env[1379]: time="2024-02-09T18:37:25.628467586Z" level=info msg="shim disconnected" id=6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0
Feb 9 18:37:25.628526 env[1379]: time="2024-02-09T18:37:25.628521225Z" level=warning msg="cleaning up after shim disconnected" id=6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0 namespace=k8s.io
Feb 9 18:37:25.628526 env[1379]: time="2024-02-09T18:37:25.628532705Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:25.635183 env[1379]: time="2024-02-09T18:37:25.635132211Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4294 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:37:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 18:37:25.635448 env[1379]: time="2024-02-09T18:37:25.635369530Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Feb 9 18:37:25.635642 env[1379]: time="2024-02-09T18:37:25.635611689Z" level=error msg="Failed to pipe stdout of container \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\"" error="reading from a closed fifo"
Feb 9 18:37:25.638664 env[1379]: time="2024-02-09T18:37:25.638612043Z" level=error msg="Failed to pipe stderr of container \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\"" error="reading from a closed fifo"
Feb 9 18:37:25.642813 env[1379]: time="2024-02-09T18:37:25.642765193Z" level=error msg="StartContainer for \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 18:37:25.644878 kubelet[2475]: E0209 18:37:25.644849 2475 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0"
Feb 9 18:37:25.645016 kubelet[2475]: E0209 18:37:25.644959 2475 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 18:37:25.645016 kubelet[2475]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 18:37:25.645016 kubelet[2475]: rm /hostbin/cilium-mount
Feb 9 18:37:25.645100 kubelet[2475]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-szzzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bnnqb_kube-system(510c09d0-3c10-403d-97f1-c756ab1e24e6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 18:37:25.645100 kubelet[2475]: E0209 18:37:25.644995 2475 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bnnqb" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6"
Feb 9 18:37:25.664609 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.12.6:37962.service.
Feb 9 18:37:26.078843 sshd[4307]: Accepted publickey for core from 10.200.12.6 port 37962 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:26.080469 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:26.084763 systemd-logind[1364]: New session 26 of user core.
Feb 9 18:37:26.085238 systemd[1]: Started session-26.scope.
Feb 9 18:37:26.208693 kubelet[2475]: E0209 18:37:26.208661 2475 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 18:37:26.537909 kubelet[2475]: I0209 18:37:26.537884 2475 scope.go:117] "RemoveContainer" containerID="56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a"
Feb 9 18:37:26.538738 env[1379]: time="2024-02-09T18:37:26.538691129Z" level=info msg="StopPodSandbox for \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\""
Feb 9 18:37:26.541083 env[1379]: time="2024-02-09T18:37:26.538754089Z" level=info msg="Container to stop \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:26.541083 env[1379]: time="2024-02-09T18:37:26.538769489Z" level=info msg="Container to stop \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:37:26.540334 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303-shm.mount: Deactivated successfully.
Feb 9 18:37:26.542825 env[1379]: time="2024-02-09T18:37:26.542476441Z" level=info msg="RemoveContainer for \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\""
Feb 9 18:37:26.553454 env[1379]: time="2024-02-09T18:37:26.551549781Z" level=info msg="RemoveContainer for \"56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a\" returns successfully"
Feb 9 18:37:26.561060 systemd[1]: cri-containerd-1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303.scope: Deactivated successfully.
Feb 9 18:37:26.577210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303-rootfs.mount: Deactivated successfully.
Feb 9 18:37:26.594491 env[1379]: time="2024-02-09T18:37:26.594419287Z" level=info msg="shim disconnected" id=1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303
Feb 9 18:37:26.594491 env[1379]: time="2024-02-09T18:37:26.594490927Z" level=warning msg="cleaning up after shim disconnected" id=1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303 namespace=k8s.io
Feb 9 18:37:26.594744 env[1379]: time="2024-02-09T18:37:26.594500847Z" level=info msg="cleaning up dead shim"
Feb 9 18:37:26.602363 env[1379]: time="2024-02-09T18:37:26.602317750Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4336 runtime=io.containerd.runc.v2\n"
Feb 9 18:37:26.602645 env[1379]: time="2024-02-09T18:37:26.602611829Z" level=info msg="TearDown network for sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" successfully"
Feb 9 18:37:26.602686 env[1379]: time="2024-02-09T18:37:26.602642989Z" level=info msg="StopPodSandbox for \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" returns successfully"
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693842 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szzzh\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-kube-api-access-szzzh\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693882 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-hostproc\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693902 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cni-path\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693924 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-hubble-tls\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693943 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-run\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693961 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-kernel\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693978 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-bpf-maps\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.693994 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-lib-modules\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694013 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-clustermesh-secrets\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694035 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-config-path\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694052 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-xtables-lock\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694095 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-net\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694117 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-cgroup\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694139 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-etc-cni-netd\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694159 2475 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-ipsec-secrets\") pod \"510c09d0-3c10-403d-97f1-c756ab1e24e6\" (UID: \"510c09d0-3c10-403d-97f1-c756ab1e24e6\") "
Feb 9 18:37:26.694488 kubelet[2475]: I0209 18:37:26.694484 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694535 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694555 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694765 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694787 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694806 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.694970 kubelet[2475]: I0209 18:37:26.694836 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.697080 kubelet[2475]: I0209 18:37:26.695343 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.697080 kubelet[2475]: I0209 18:37:26.695378 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.697080 kubelet[2475]: I0209 18:37:26.695395 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 18:37:26.698739 kubelet[2475]: I0209 18:37:26.697447 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 18:37:26.700258 systemd[1]: var-lib-kubelet-pods-510c09d0\x2d3c10\x2d403d\x2d97f1\x2dc756ab1e24e6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:37:26.702200 systemd[1]: var-lib-kubelet-pods-510c09d0\x2d3c10\x2d403d\x2d97f1\x2dc756ab1e24e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 18:37:26.704640 kubelet[2475]: I0209 18:37:26.704616 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:37:26.704813 kubelet[2475]: I0209 18:37:26.704796 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-kube-api-access-szzzh" (OuterVolumeSpecName: "kube-api-access-szzzh") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "kube-api-access-szzzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:37:26.706407 kubelet[2475]: I0209 18:37:26.706385 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 18:37:26.706865 kubelet[2475]: I0209 18:37:26.706847 2475 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "510c09d0-3c10-403d-97f1-c756ab1e24e6" (UID: "510c09d0-3c10-403d-97f1-c756ab1e24e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 18:37:26.796485 kubelet[2475]: I0209 18:37:26.795217 2475 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-etc-cni-netd\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.796682 kubelet[2475]: I0209 18:37:26.796664 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.796777 kubelet[2475]: I0209 18:37:26.796768 2475 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-szzzh\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-kube-api-access-szzzh\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.796863 kubelet[2475]: I0209 18:37:26.796854 2475 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-hostproc\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.796932 kubelet[2475]: I0209 18:37:26.796924 2475 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cni-path\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797035 kubelet[2475]: I0209 18:37:26.797021 2475 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/510c09d0-3c10-403d-97f1-c756ab1e24e6-hubble-tls\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797115 kubelet[2475]: I0209 18:37:26.797106 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-run\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797185 kubelet[2475]: I0209 18:37:26.797177 2475 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797254 kubelet[2475]: I0209 18:37:26.797246 2475 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-bpf-maps\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797327 kubelet[2475]: I0209 18:37:26.797318 2475 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-lib-modules\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797390 kubelet[2475]: I0209 18:37:26.797381 2475 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/510c09d0-3c10-403d-97f1-c756ab1e24e6-clustermesh-secrets\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797517 kubelet[2475]: I0209 18:37:26.797507 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-config-path\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797592 kubelet[2475]: I0209 18:37:26.797582 2475 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-xtables-lock\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797667 kubelet[2475]: I0209 18:37:26.797657 2475 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-host-proc-sys-net\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.797730 kubelet[2475]: I0209 18:37:26.797721 2475 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/510c09d0-3c10-403d-97f1-c756ab1e24e6-cilium-cgroup\") on node \"ci-3510.3.2-a-de7ead93d8\" DevicePath \"\""
Feb 9 18:37:26.890794 systemd[1]: var-lib-kubelet-pods-510c09d0\x2d3c10\x2d403d\x2d97f1\x2dc756ab1e24e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszzzh.mount: Deactivated successfully.
Feb 9 18:37:26.890884 systemd[1]: var-lib-kubelet-pods-510c09d0\x2d3c10\x2d403d\x2d97f1\x2dc756ab1e24e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 18:37:27.045015 systemd[1]: Removed slice kubepods-burstable-pod510c09d0_3c10_403d_97f1_c756ab1e24e6.slice.
Feb 9 18:37:27.540867 kubelet[2475]: I0209 18:37:27.540831 2475 scope.go:117] "RemoveContainer" containerID="6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0"
Feb 9 18:37:27.543086 env[1379]: time="2024-02-09T18:37:27.543053204Z" level=info msg="RemoveContainer for \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\""
Feb 9 18:37:27.551717 env[1379]: time="2024-02-09T18:37:27.551681466Z" level=info msg="RemoveContainer for \"6d209cc82e4a35d0c1b39eb7eb222f60319040e8aeeb3d27b61010004c9c59d0\" returns successfully"
Feb 9 18:37:27.585633 kubelet[2475]: I0209 18:37:27.585590 2475 topology_manager.go:215] "Topology Admit Handler" podUID="727b3d30-9a7f-493c-a6fe-02de08f91ac2" podNamespace="kube-system" podName="cilium-2g77g"
Feb 9 18:37:27.585788 kubelet[2475]: E0209 18:37:27.585651 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" containerName="mount-cgroup"
Feb 9 18:37:27.585788 kubelet[2475]: E0209 18:37:27.585662 2475 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" containerName="mount-cgroup"
Feb 9 18:37:27.585788 kubelet[2475]: I0209 18:37:27.585682 2475 memory_manager.go:346] "RemoveStaleState removing state" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" containerName="mount-cgroup"
Feb 9 18:37:27.585788 kubelet[2475]: I0209 18:37:27.585689 2475 memory_manager.go:346] "RemoveStaleState removing state" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" containerName="mount-cgroup"
Feb 9 18:37:27.590941 systemd[1]: Created slice kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice.
Feb 9 18:37:27.603283 kubelet[2475]: I0209 18:37:27.603247 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/727b3d30-9a7f-493c-a6fe-02de08f91ac2-hubble-tls\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603543 kubelet[2475]: I0209 18:37:27.603531 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-etc-cni-netd\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603708 kubelet[2475]: I0209 18:37:27.603696 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/727b3d30-9a7f-493c-a6fe-02de08f91ac2-clustermesh-secrets\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603838 kubelet[2475]: I0209 18:37:27.603806 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/727b3d30-9a7f-493c-a6fe-02de08f91ac2-cilium-ipsec-secrets\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603888 kubelet[2475]: I0209 18:37:27.603861 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-lib-modules\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603888 kubelet[2475]: I0209 18:37:27.603884 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/727b3d30-9a7f-493c-a6fe-02de08f91ac2-cilium-config-path\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603945 kubelet[2475]: I0209 18:37:27.603903 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t97xm\" (UniqueName: \"kubernetes.io/projected/727b3d30-9a7f-493c-a6fe-02de08f91ac2-kube-api-access-t97xm\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603945 kubelet[2475]: I0209 18:37:27.603925 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-hostproc\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.603945 kubelet[2475]: I0209 18:37:27.603943 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-cni-path\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604022 kubelet[2475]: I0209 18:37:27.603965 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-cilium-cgroup\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604022 kubelet[2475]: I0209 18:37:27.603983 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-xtables-lock\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604022 kubelet[2475]: I0209 18:37:27.604001 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-cilium-run\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604090 kubelet[2475]: I0209 18:37:27.604026 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-host-proc-sys-net\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604090 kubelet[2475]: I0209 18:37:27.604047 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-host-proc-sys-kernel\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.604090 kubelet[2475]: I0209 18:37:27.604064 2475 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/727b3d30-9a7f-493c-a6fe-02de08f91ac2-bpf-maps\") pod \"cilium-2g77g\" (UID: \"727b3d30-9a7f-493c-a6fe-02de08f91ac2\") " pod="kube-system/cilium-2g77g"
Feb 9 18:37:27.894002 env[1379]: time="2024-02-09T18:37:27.893613138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2g77g,Uid:727b3d30-9a7f-493c-a6fe-02de08f91ac2,Namespace:kube-system,Attempt:0,}"
Feb 9 18:37:27.932189 env[1379]: time="2024-02-09T18:37:27.932117176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:37:27.932189 env[1379]: time="2024-02-09T18:37:27.932154936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:37:27.932189 env[1379]: time="2024-02-09T18:37:27.932170376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:37:27.932597 env[1379]: time="2024-02-09T18:37:27.932553655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324 pid=4365 runtime=io.containerd.runc.v2
Feb 9 18:37:27.944435 systemd[1]: Started cri-containerd-f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324.scope.
Feb 9 18:37:27.951452 systemd[1]: run-containerd-runc-k8s.io-f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324-runc.pO7FJg.mount: Deactivated successfully.
Feb 9 18:37:27.970943 env[1379]: time="2024-02-09T18:37:27.970902614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2g77g,Uid:727b3d30-9a7f-493c-a6fe-02de08f91ac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\"" Feb 9 18:37:27.974466 env[1379]: time="2024-02-09T18:37:27.974402526Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:37:28.007233 env[1379]: time="2024-02-09T18:37:28.007177057Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96\"" Feb 9 18:37:28.009438 env[1379]: time="2024-02-09T18:37:28.009405572Z" level=info msg="StartContainer for \"3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96\"" Feb 9 18:37:28.025028 systemd[1]: Started cri-containerd-3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96.scope. Feb 9 18:37:28.054280 env[1379]: time="2024-02-09T18:37:28.054234680Z" level=info msg="StartContainer for \"3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96\" returns successfully" Feb 9 18:37:28.059879 systemd[1]: cri-containerd-3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96.scope: Deactivated successfully. 
Feb 9 18:37:28.109662 env[1379]: time="2024-02-09T18:37:28.109603045Z" level=info msg="shim disconnected" id=3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96 Feb 9 18:37:28.109662 env[1379]: time="2024-02-09T18:37:28.109659725Z" level=warning msg="cleaning up after shim disconnected" id=3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96 namespace=k8s.io Feb 9 18:37:28.109662 env[1379]: time="2024-02-09T18:37:28.109669325Z" level=info msg="cleaning up dead shim" Feb 9 18:37:28.117675 env[1379]: time="2024-02-09T18:37:28.117632109Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4447 runtime=io.containerd.runc.v2\n" Feb 9 18:37:28.305410 kubelet[2475]: W0209 18:37:28.305229 2475 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod510c09d0_3c10_403d_97f1_c756ab1e24e6.slice/cri-containerd-56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a.scope WatchSource:0}: container "56e5622cd6402a3ec817dc33221f38291ef72fd4dd5960a6f8ca8649b247dd7a" in namespace "k8s.io": not found Feb 9 18:37:28.546805 env[1379]: time="2024-02-09T18:37:28.546752582Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:37:28.578744 env[1379]: time="2024-02-09T18:37:28.578617116Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1\"" Feb 9 18:37:28.579263 env[1379]: time="2024-02-09T18:37:28.579230075Z" level=info msg="StartContainer for \"3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1\"" Feb 9 18:37:28.592822 systemd[1]: 
Started cri-containerd-3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1.scope. Feb 9 18:37:28.624990 env[1379]: time="2024-02-09T18:37:28.624940580Z" level=info msg="StartContainer for \"3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1\" returns successfully" Feb 9 18:37:28.628754 systemd[1]: cri-containerd-3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1.scope: Deactivated successfully. Feb 9 18:37:28.663778 env[1379]: time="2024-02-09T18:37:28.663732020Z" level=info msg="shim disconnected" id=3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1 Feb 9 18:37:28.664005 env[1379]: time="2024-02-09T18:37:28.663986260Z" level=warning msg="cleaning up after shim disconnected" id=3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1 namespace=k8s.io Feb 9 18:37:28.664065 env[1379]: time="2024-02-09T18:37:28.664052900Z" level=info msg="cleaning up dead shim" Feb 9 18:37:28.671490 env[1379]: time="2024-02-09T18:37:28.671457204Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4510 runtime=io.containerd.runc.v2\n" Feb 9 18:37:29.041813 kubelet[2475]: I0209 18:37:29.041770 2475 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="510c09d0-3c10-403d-97f1-c756ab1e24e6" path="/var/lib/kubelet/pods/510c09d0-3c10-403d-97f1-c756ab1e24e6/volumes" Feb 9 18:37:29.550143 env[1379]: time="2024-02-09T18:37:29.550100022Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:37:29.583159 env[1379]: time="2024-02-09T18:37:29.583111076Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98\"" 
Feb 9 18:37:29.583827 env[1379]: time="2024-02-09T18:37:29.583790674Z" level=info msg="StartContainer for \"585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98\"" Feb 9 18:37:29.601267 systemd[1]: Started cri-containerd-585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98.scope. Feb 9 18:37:29.634306 systemd[1]: cri-containerd-585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98.scope: Deactivated successfully. Feb 9 18:37:29.639537 env[1379]: time="2024-02-09T18:37:29.639490803Z" level=info msg="StartContainer for \"585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98\" returns successfully" Feb 9 18:37:29.672960 env[1379]: time="2024-02-09T18:37:29.672908616Z" level=info msg="shim disconnected" id=585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98 Feb 9 18:37:29.672960 env[1379]: time="2024-02-09T18:37:29.672956135Z" level=warning msg="cleaning up after shim disconnected" id=585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98 namespace=k8s.io Feb 9 18:37:29.672960 env[1379]: time="2024-02-09T18:37:29.672967495Z" level=info msg="cleaning up dead shim" Feb 9 18:37:29.679880 env[1379]: time="2024-02-09T18:37:29.679833642Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4568 runtime=io.containerd.runc.v2\n" Feb 9 18:37:29.927803 systemd[1]: run-containerd-runc-k8s.io-585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98-runc.1HR8sW.mount: Deactivated successfully. Feb 9 18:37:29.927903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98-rootfs.mount: Deactivated successfully. 
Feb 9 18:37:30.554205 env[1379]: time="2024-02-09T18:37:30.554152560Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:37:30.583187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534902179.mount: Deactivated successfully. Feb 9 18:37:30.601761 env[1379]: time="2024-02-09T18:37:30.601708708Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953\"" Feb 9 18:37:30.602512 env[1379]: time="2024-02-09T18:37:30.602489106Z" level=info msg="StartContainer for \"9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953\"" Feb 9 18:37:30.616166 systemd[1]: Started cri-containerd-9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953.scope. Feb 9 18:37:30.643593 systemd[1]: cri-containerd-9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953.scope: Deactivated successfully. 
Feb 9 18:37:30.645336 env[1379]: time="2024-02-09T18:37:30.645204183Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice/cri-containerd-9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953.scope/memory.events\": no such file or directory" Feb 9 18:37:30.650004 env[1379]: time="2024-02-09T18:37:30.649963574Z" level=info msg="StartContainer for \"9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953\" returns successfully" Feb 9 18:37:30.677632 env[1379]: time="2024-02-09T18:37:30.677576680Z" level=info msg="shim disconnected" id=9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953 Feb 9 18:37:30.677632 env[1379]: time="2024-02-09T18:37:30.677623960Z" level=warning msg="cleaning up after shim disconnected" id=9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953 namespace=k8s.io Feb 9 18:37:30.677632 env[1379]: time="2024-02-09T18:37:30.677634160Z" level=info msg="cleaning up dead shim" Feb 9 18:37:30.685280 env[1379]: time="2024-02-09T18:37:30.685226665Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4622 runtime=io.containerd.runc.v2\n" Feb 9 18:37:31.210137 kubelet[2475]: E0209 18:37:31.210106 2475 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:37:31.417865 kubelet[2475]: W0209 18:37:31.417821 2475 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice/cri-containerd-3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96.scope WatchSource:0}: task 
3ef21c2a569b418e653ff96b5af087dfe57bb5583f3c3f44f0ecf2e888333e96 not found: not found Feb 9 18:37:31.558401 env[1379]: time="2024-02-09T18:37:31.558308838Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:37:31.637328 env[1379]: time="2024-02-09T18:37:31.637286049Z" level=info msg="CreateContainer within sandbox \"f4f5b2cae13d28b3935dedd3b484e6b950d1ad557577d7c6549586c88c2a5324\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72\"" Feb 9 18:37:31.638148 env[1379]: time="2024-02-09T18:37:31.638125007Z" level=info msg="StartContainer for \"c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72\"" Feb 9 18:37:31.656455 systemd[1]: Started cri-containerd-c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72.scope. Feb 9 18:37:31.689491 env[1379]: time="2024-02-09T18:37:31.689398390Z" level=info msg="StartContainer for \"c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72\" returns successfully" Feb 9 18:37:32.052454 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 18:37:32.574038 kubelet[2475]: I0209 18:37:32.574002 2475 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2g77g" podStartSLOduration=5.573966353 podCreationTimestamp="2024-02-09 18:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:37:32.572743515 +0000 UTC m=+211.644771706" watchObservedRunningTime="2024-02-09 18:37:32.573966353 +0000 UTC m=+211.645994504" Feb 9 18:37:34.503872 systemd-networkd[1526]: lxc_health: Link UP Feb 9 18:37:34.514017 systemd-networkd[1526]: lxc_health: Gained carrier Feb 9 18:37:34.514708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: 
link becomes ready Feb 9 18:37:34.528454 kubelet[2475]: W0209 18:37:34.526584 2475 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice/cri-containerd-3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1.scope WatchSource:0}: task 3e35ec66e877733c0cd4f548a6c792e6f26d93fa187006d42284c20ba48f11a1 not found: not found Feb 9 18:37:35.944079 kubelet[2475]: I0209 18:37:35.944048 2475 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-de7ead93d8" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T18:37:35Z","lastTransitionTime":"2024-02-09T18:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 18:37:36.276591 systemd-networkd[1526]: lxc_health: Gained IPv6LL Feb 9 18:37:36.849276 systemd[1]: run-containerd-runc-k8s.io-c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72-runc.Mjq3II.mount: Deactivated successfully. 
Feb 9 18:37:37.634629 kubelet[2475]: W0209 18:37:37.634587 2475 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice/cri-containerd-585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98.scope WatchSource:0}: task 585b2567923e9a134d3c3966c2a903903eb05167f8367e609f6395dc2c8ebf98 not found: not found Feb 9 18:37:40.745701 kubelet[2475]: W0209 18:37:40.745654 2475 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727b3d30_9a7f_493c_a6fe_02de08f91ac2.slice/cri-containerd-9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953.scope WatchSource:0}: task 9d4c336e6ac871a405bd72922548a8492ecc407d463e9cb7d45da1121e0b4953 not found: not found Feb 9 18:37:41.082775 systemd[1]: run-containerd-runc-k8s.io-c6618a1a016dc226df6633b258fe86d5645b139b3d2c86a4c3246fc474396f72-runc.XsKhIc.mount: Deactivated successfully. Feb 9 18:37:41.221485 sshd[4307]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:41.224312 systemd[1]: sshd@23-10.200.20.37:22-10.200.12.6:37962.service: Deactivated successfully. Feb 9 18:37:41.225048 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 18:37:41.225983 systemd-logind[1364]: Session 26 logged out. Waiting for processes to exit. Feb 9 18:37:41.226801 systemd-logind[1364]: Removed session 26. Feb 9 18:37:55.325948 systemd[1]: cri-containerd-333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44.scope: Deactivated successfully. Feb 9 18:37:55.326246 systemd[1]: cri-containerd-333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44.scope: Consumed 4.099s CPU time. Feb 9 18:37:55.356322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44-rootfs.mount: Deactivated successfully. 
Feb 9 18:37:55.392411 env[1379]: time="2024-02-09T18:37:55.392367004Z" level=info msg="shim disconnected" id=333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44 Feb 9 18:37:55.392851 env[1379]: time="2024-02-09T18:37:55.392829563Z" level=warning msg="cleaning up after shim disconnected" id=333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44 namespace=k8s.io Feb 9 18:37:55.392931 env[1379]: time="2024-02-09T18:37:55.392918323Z" level=info msg="cleaning up dead shim" Feb 9 18:37:55.399751 env[1379]: time="2024-02-09T18:37:55.399711878Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5304 runtime=io.containerd.runc.v2\n" Feb 9 18:37:55.600473 kubelet[2475]: I0209 18:37:55.600360 2475 scope.go:117] "RemoveContainer" containerID="333224d3ba6289e60f81fae99d1c4b768f84f6b7f1523de6df37337823dabb44" Feb 9 18:37:55.603119 env[1379]: time="2024-02-09T18:37:55.603086449Z" level=info msg="CreateContainer within sandbox \"e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 18:37:55.631322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540572695.mount: Deactivated successfully. Feb 9 18:37:55.636464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347300892.mount: Deactivated successfully. 
Feb 9 18:37:55.650201 env[1379]: time="2024-02-09T18:37:55.650163655Z" level=info msg="CreateContainer within sandbox \"e0b1b0b791e3d1a364f543ff2544f2e1277176b4e08839ec7f8899f4bdddf144\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"15aebaecbcb7b38238d30b565b3cfef367f4832b65402b5f294472230565c05a\"" Feb 9 18:37:55.650828 env[1379]: time="2024-02-09T18:37:55.650806694Z" level=info msg="StartContainer for \"15aebaecbcb7b38238d30b565b3cfef367f4832b65402b5f294472230565c05a\"" Feb 9 18:37:55.664331 systemd[1]: Started cri-containerd-15aebaecbcb7b38238d30b565b3cfef367f4832b65402b5f294472230565c05a.scope. Feb 9 18:37:55.702123 env[1379]: time="2024-02-09T18:37:55.702076017Z" level=info msg="StartContainer for \"15aebaecbcb7b38238d30b565b3cfef367f4832b65402b5f294472230565c05a\" returns successfully" Feb 9 18:37:55.735584 kubelet[2475]: E0209 18:37:55.735344 2475 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:44330->10.200.20.23:2379: read: connection timed out" Feb 9 18:37:55.738577 systemd[1]: cri-containerd-52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc.scope: Deactivated successfully. Feb 9 18:37:55.738871 systemd[1]: cri-containerd-52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc.scope: Consumed 3.450s CPU time. 
Feb 9 18:37:55.776434 env[1379]: time="2024-02-09T18:37:55.776372882Z" level=info msg="shim disconnected" id=52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc Feb 9 18:37:55.776612 env[1379]: time="2024-02-09T18:37:55.776417402Z" level=warning msg="cleaning up after shim disconnected" id=52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc namespace=k8s.io Feb 9 18:37:55.776612 env[1379]: time="2024-02-09T18:37:55.776496322Z" level=info msg="cleaning up dead shim" Feb 9 18:37:55.783214 env[1379]: time="2024-02-09T18:37:55.783171517Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:37:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5366 runtime=io.containerd.runc.v2\n" Feb 9 18:37:56.058353 kubelet[2475]: E0209 18:37:56.058298 2475 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:37:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:37:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:37:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-09T18:37:46Z\\\",\\\"lastTransitionTime\\\":\\\"2024-02-09T18:37:46Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-3510.3.2-a-de7ead93d8\": Patch \"https://10.200.20.37:6443/api/v1/nodes/ci-3510.3.2-a-de7ead93d8/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:37:56.309498 kubelet[2475]: E0209 18:37:56.309403 2475 kubelet_node_status.go:540] "Error 
updating node status, will retry" err="error getting node \"ci-3510.3.2-a-de7ead93d8\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:44246->10.200.20.23:2379: read: connection timed out" Feb 9 18:37:56.357102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc-rootfs.mount: Deactivated successfully. Feb 9 18:37:56.501565 kubelet[2475]: E0209 18:37:56.501458 2475 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-de7ead93d8.17b2458a70594e84", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"490", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-de7ead93d8", UID:"ci-3510.3.2-a-de7ead93d8", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeReady", Message:"Node ci-3510.3.2-a-de7ead93d8 status is now: NodeReady", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-de7ead93d8"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 29, 0, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 37, 46, 57398856, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-de7ead93d8"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:44144->10.200.20.23:2379: read: connection timed out' (will not retry!) 
Feb 9 18:37:56.603804 kubelet[2475]: I0209 18:37:56.603716 2475 scope.go:117] "RemoveContainer" containerID="52cf69269604e34387b50c44cb76d3cb915cd43f5f53f363d4dac06352a61ddc" Feb 9 18:37:56.605523 env[1379]: time="2024-02-09T18:37:56.605488659Z" level=info msg="CreateContainer within sandbox \"e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 18:37:56.634687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408570752.mount: Deactivated successfully. Feb 9 18:37:56.639682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945091610.mount: Deactivated successfully. Feb 9 18:37:56.656398 env[1379]: time="2024-02-09T18:37:56.656360264Z" level=info msg="CreateContainer within sandbox \"e349a267f5134f4fed3885ffb4d23fc595d468dc80eb8dfacf140461bb7a9ba1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2c868ea95da46474c46a465a87bb45c57e360cf493112d7eefbda4d8b0d7db90\"" Feb 9 18:37:56.656960 env[1379]: time="2024-02-09T18:37:56.656937103Z" level=info msg="StartContainer for \"2c868ea95da46474c46a465a87bb45c57e360cf493112d7eefbda4d8b0d7db90\"" Feb 9 18:37:56.671111 systemd[1]: Started cri-containerd-2c868ea95da46474c46a465a87bb45c57e360cf493112d7eefbda4d8b0d7db90.scope. 
Feb 9 18:37:56.722209 env[1379]: time="2024-02-09T18:37:56.722160858Z" level=info msg="StartContainer for \"2c868ea95da46474c46a465a87bb45c57e360cf493112d7eefbda4d8b0d7db90\" returns successfully" Feb 9 18:38:01.082404 env[1379]: time="2024-02-09T18:38:01.082361552Z" level=info msg="StopPodSandbox for \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\"" Feb 9 18:38:01.082776 env[1379]: time="2024-02-09T18:38:01.082470112Z" level=info msg="TearDown network for sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" successfully" Feb 9 18:38:01.082776 env[1379]: time="2024-02-09T18:38:01.082510912Z" level=info msg="StopPodSandbox for \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" returns successfully" Feb 9 18:38:01.083238 env[1379]: time="2024-02-09T18:38:01.083204392Z" level=info msg="RemovePodSandbox for \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\"" Feb 9 18:38:01.083310 env[1379]: time="2024-02-09T18:38:01.083234952Z" level=info msg="Forcibly stopping sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\"" Feb 9 18:38:01.083310 env[1379]: time="2024-02-09T18:38:01.083292992Z" level=info msg="TearDown network for sandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" successfully" Feb 9 18:38:01.091884 env[1379]: time="2024-02-09T18:38:01.091848867Z" level=info msg="RemovePodSandbox \"1d5e6b94f6173f425106b8e6ca7bf4bfbd26b5e71474791c4251585dc0334303\" returns successfully" Feb 9 18:38:01.092248 env[1379]: time="2024-02-09T18:38:01.092226027Z" level=info msg="StopPodSandbox for \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\"" Feb 9 18:38:01.092423 env[1379]: time="2024-02-09T18:38:01.092391347Z" level=info msg="TearDown network for sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" successfully" Feb 9 18:38:01.092676 env[1379]: time="2024-02-09T18:38:01.092650787Z" level=info 
msg="StopPodSandbox for \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" returns successfully"
Feb 9 18:38:01.093128 env[1379]: time="2024-02-09T18:38:01.093104827Z" level=info msg="RemovePodSandbox for \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\""
Feb 9 18:38:01.093282 env[1379]: time="2024-02-09T18:38:01.093237426Z" level=info msg="Forcibly stopping sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\""
Feb 9 18:38:01.093402 env[1379]: time="2024-02-09T18:38:01.093384706Z" level=info msg="TearDown network for sandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" successfully"
Feb 9 18:38:01.101906 env[1379]: time="2024-02-09T18:38:01.101875302Z" level=info msg="RemovePodSandbox \"9b6b40ed58abfecfa6e29b0ad27f9161df98abfa6d9d63bc5935a6e50fbcfa8c\" returns successfully"
Feb 9 18:38:01.102389 env[1379]: time="2024-02-09T18:38:01.102365142Z" level=info msg="StopPodSandbox for \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\""
Feb 9 18:38:01.102615 env[1379]: time="2024-02-09T18:38:01.102580582Z" level=info msg="TearDown network for sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" successfully"
Feb 9 18:38:01.102696 env[1379]: time="2024-02-09T18:38:01.102678182Z" level=info msg="StopPodSandbox for \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" returns successfully"
Feb 9 18:38:01.103041 env[1379]: time="2024-02-09T18:38:01.103020342Z" level=info msg="RemovePodSandbox for \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\""
Feb 9 18:38:01.103204 env[1379]: time="2024-02-09T18:38:01.103165581Z" level=info msg="Forcibly stopping sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\""
Feb 9 18:38:01.103330 env[1379]: time="2024-02-09T18:38:01.103310861Z" level=info msg="TearDown network for sandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" successfully"
Feb 9 18:38:01.113288 env[1379]: time="2024-02-09T18:38:01.113247816Z" level=info msg="RemovePodSandbox \"62d2e01873d2ef841f5dea619e6433adeb1f524c993f2222cc2303ec2ddae6e3\" returns successfully"
Feb 9 18:38:05.735904 kubelet[2475]: E0209 18:38:05.735864 2475 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 18:38:06.037421 kubelet[2475]: I0209 18:38:06.037313 2475 status_manager.go:853] "Failed to get status for pod" podUID="1e7d23714bba59664f2691dcc93e4dd1" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-de7ead93d8" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.37:44256->10.200.20.23:2379: read: connection timed out"
Feb 9 18:38:06.310671 kubelet[2475]: E0209 18:38:06.310580 2475 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ci-3510.3.2-a-de7ead93d8\": Get \"https://10.200.20.37:6443/api/v1/nodes/ci-3510.3.2-a-de7ead93d8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 18:38:15.736105 kubelet[2475]: E0209 18:38:15.736066 2475 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-de7ead93d8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 18:38:16.174975 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.175286 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.177504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.184898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.192469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.199732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 18:38:16.206971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131-tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001, identical messages repeated from Feb 9 18:38:16.229239 through 18:38:16.310133]
Feb 9 18:38:16.311594 kubelet[2475]: E0209 18:38:16.311558 2475 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ci-3510.3.2-a-de7ead93d8\": Get \"https://10.200.20.37:6443/api/v1/nodes/ci-3510.3.2-a-de7ead93d8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
[kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131-tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001, identical messages repeated from Feb 9 18:38:16.317655 through 18:38:18.428744]
Feb 9 18:38:18.452379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2
srb 0x4 hv 0xc0000001 Feb 9 18:38:18.452608 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.452726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.460300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.468031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.477064 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.484846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.501630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.501815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.509846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.517669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.525522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.532994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.546445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.565284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.565546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.565667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.573154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.580853 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.589098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.596683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.613071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.613279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.621612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.629387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.637517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.646038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.653861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.669899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.670097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.677861 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.686161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.694331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.702608 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.710256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.725954 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.726155 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.733627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.742198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.750456 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.759157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.766891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.783333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.783534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.791451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
18:38:18.799359 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.807398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.815061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.823207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.839539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.839791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.847931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.856083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.864114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.872180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.880090 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.896894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.897093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.905284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.913073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 18:38:18.920828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.928963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.936593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.960841 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.961080 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.961189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.968640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.976459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.984292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:18.991838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.007379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.007593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.015410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.022840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.030683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.038455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.046810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.062991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.063216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.071021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.078903 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.086640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.095896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.105011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.122034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.122252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.129474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.137393 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.145333 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.160586 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.161781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.177091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.177301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.184791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.192886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.200599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.208340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.216201 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.231490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.231692 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.239288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.247103 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.254855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.262621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
18:38:19.270297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.285617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.285803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.293063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.301374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.309049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.316617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.324261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.339913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.340156 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.348242 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.356170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.363587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.371065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.378512 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 18:38:19.393962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.394186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.401699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.410161 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.418131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.425764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.433269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.457669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.457927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.458036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.465708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.473785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.481624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.491042 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.507812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.508052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.515833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.523596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.531590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.539329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.547483 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.563945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.564170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.571528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.579394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.587273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.595545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.603556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.619918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.620117 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.628199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.636414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.644931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.652958 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.661582 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.686330 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.686557 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.686665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.694574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.702767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.711013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.719002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.734469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.734706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
18:38:19.742655 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.750869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.758799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.766711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.774606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.791357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.791565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.807472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.807747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.815336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.823173 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.830924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#131 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.847475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.847683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#134 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.855553 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 18:38:19.863665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#135 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.871649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:19.881794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
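As an aid to reading the flood of repeated errors above, the following is a minimal sketch (an editor's addition, not part of the log) that parses hv_storvsc error lines and tallies how often each distinct failure tuple occurs. The field decodings in the comments (cmd 0x2a = SCSI WRITE(10), scsi status 0x2 = CHECK CONDITION, srb status 0x4 = SRB_STATUS_ERROR) are standard SCSI/SRB values and an assumption about how the driver reports them; the regex and function names are hypothetical.

```python
import re
from collections import Counter

# Matches one hv_storvsc error line as seen in the log above; the pattern
# ignores any timestamp prefix and captures the device GUID and hex fields.
LINE_RE = re.compile(
    r"hv_storvsc (?P<dev>[0-9a-f-]+): tag#(?P<tag>\d+) "
    r"cmd 0x(?P<cmd>[0-9a-f]+) status: scsi 0x(?P<scsi>[0-9a-f]+) "
    r"srb 0x(?P<srb>[0-9a-f]+) hv 0x(?P<hv>[0-9a-f]+)"
)

# Standard SCSI decodings (assumption: the driver logs raw SCSI/SRB values):
# cmd 0x2a is WRITE(10); scsi status 0x02 is CHECK CONDITION.
SCSI_OPCODES = {0x2A: "WRITE(10)", 0x28: "READ(10)"}
SCSI_STATUS = {0x02: "CHECK CONDITION"}

def summarize(lines):
    """Count occurrences of each (tag, cmd, scsi, srb, hv) failure tuple."""
    counts = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            counts[(int(m["tag"]), int(m["cmd"], 16), int(m["scsi"], 16),
                    int(m["srb"], 16), int(m["hv"], 16))] += 1
    return counts
```

Run against the section above, every entry collapses to the same tuple per tag, which is why a repeat-count summary loses almost nothing.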