Jun 25 14:53:57.148029 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 14:53:57.148047 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:53:57.148055 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 14:53:57.148062 kernel: printk: bootconsole [pl11] enabled Jun 25 14:53:57.148067 kernel: efi: EFI v2.70 by EDK II Jun 25 14:53:57.148072 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e94ae18 Jun 25 14:53:57.148078 kernel: random: crng init done Jun 25 14:53:57.148083 kernel: ACPI: Early table checksum verification disabled Jun 25 14:53:57.148088 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 14:53:57.148094 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148099 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148106 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 14:53:57.148111 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148116 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148123 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148128 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148134 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148141 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148147 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 14:53:57.148152 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 14:53:57.148158 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 14:53:57.148164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 14:53:57.148169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 14:53:57.148175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 14:53:57.148180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 14:53:57.148209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 14:53:57.148215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 14:53:57.148223 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 14:53:57.148228 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 14:53:57.148234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 14:53:57.148240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 14:53:57.148245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 14:53:57.148251 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 14:53:57.148256 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 14:53:57.148262 kernel: Zone ranges: Jun 25 14:53:57.148268 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 25 14:53:57.148273 kernel: DMA32 
empty Jun 25 14:53:57.148279 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:53:57.148284 kernel: Movable zone start for each node Jun 25 14:53:57.148291 kernel: Early memory node ranges Jun 25 14:53:57.148300 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 14:53:57.148306 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 14:53:57.148312 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 14:53:57.148318 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 14:53:57.148325 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 14:53:57.148331 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 14:53:57.148337 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 14:53:57.148343 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 14:53:57.148348 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 14:53:57.148354 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 14:53:57.148361 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 14:53:57.148367 kernel: psci: probing for conduit method from ACPI. Jun 25 14:53:57.148373 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 14:53:57.148379 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:53:57.148384 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 14:53:57.148390 kernel: psci: SMC Calling Convention v1.4 Jun 25 14:53:57.148397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 14:53:57.148403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 14:53:57.148409 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:53:57.148416 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:53:57.148422 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 14:53:57.148428 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:53:57.148433 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:53:57.148439 kernel: CPU features: detected: Hardware dirty bit management Jun 25 14:53:57.148445 kernel: CPU features: detected: Spectre-BHB Jun 25 14:53:57.148451 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:53:57.148457 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:53:57.148464 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 14:53:57.148470 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 14:53:57.148476 kernel: alternatives: applying boot alternatives Jun 25 14:53:57.148482 kernel: Fallback order for Node 0: 0 Jun 25 14:53:57.148488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 14:53:57.148494 kernel: Policy zone: Normal Jun 25 14:53:57.148502 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:53:57.148508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 14:53:57.148514 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:53:57.148520 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:53:57.148526 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:53:57.148533 kernel: software IO TLB: area num 2. Jun 25 14:53:57.148539 kernel: software IO TLB: mapped [mem 0x000000003a94a000-0x000000003e94a000] (64MB) Jun 25 14:53:57.148545 kernel: Memory: 3991396K/4194160K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 202764K reserved, 0K cma-reserved) Jun 25 14:53:57.148551 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 14:53:57.148557 kernel: trace event string verifier disabled Jun 25 14:53:57.148563 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:53:57.148570 kernel: rcu: RCU event tracing is enabled. Jun 25 14:53:57.148576 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 14:53:57.148582 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:53:57.148588 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:53:57.148593 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 14:53:57.148601 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 14:53:57.148607 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:53:57.148613 kernel: GICv3: 960 SPIs implemented Jun 25 14:53:57.148619 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:53:57.148625 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:53:57.148631 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 14:53:57.148637 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 14:53:57.148642 kernel: ITS: No ITS available, not enabling LPIs Jun 25 14:53:57.148649 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:53:57.148654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:53:57.148660 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 14:53:57.148667 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 14:53:57.148674 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 14:53:57.148680 kernel: Console: colour dummy device 80x25 Jun 25 14:53:57.148686 kernel: printk: console [tty1] enabled Jun 25 14:53:57.148693 kernel: ACPI: Core revision 20220331 Jun 25 14:53:57.148699 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 14:53:57.148705 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:53:57.148711 kernel: LSM: Security Framework initializing Jun 25 14:53:57.148717 kernel: SELinux: Initializing. Jun 25 14:53:57.148723 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:53:57.148731 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:53:57.148737 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:53:57.148743 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:53:57.148749 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:53:57.148755 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. 
Jun 25 14:53:57.148761 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 14:53:57.148767 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 14:53:57.148773 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 14:53:57.148785 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:53:57.148791 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:53:57.148798 kernel: Remapping and enabling EFI services. Jun 25 14:53:57.148804 kernel: smp: Bringing up secondary CPUs ... Jun 25 14:53:57.148811 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:53:57.148818 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 14:53:57.148824 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 14:53:57.148831 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 14:53:57.148837 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 14:53:57.148845 kernel: SMP: Total of 2 processors activated. Jun 25 14:53:57.148851 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:53:57.148858 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 14:53:57.148864 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 14:53:57.148871 kernel: CPU features: detected: CRC32 instructions Jun 25 14:53:57.148877 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 14:53:57.148884 kernel: CPU features: detected: LSE atomic instructions Jun 25 14:53:57.148890 kernel: CPU features: detected: Privileged Access Never Jun 25 14:53:57.148896 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:53:57.148904 kernel: alternatives: applying system-wide alternatives Jun 25 14:53:57.148910 kernel: devtmpfs: initialized Jun 25 14:53:57.148917 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:53:57.148923 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 14:53:57.148930 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:53:57.148936 kernel: SMBIOS 3.1.0 present. Jun 25 14:53:57.148942 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 14:53:57.148949 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:53:57.148955 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:53:57.148963 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:53:57.148970 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:53:57.148976 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:53:57.148983 kernel: audit: type=2000 audit(0.048:1): state=initialized audit_enabled=0 res=1 Jun 25 14:53:57.148989 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:53:57.148995 kernel: cpuidle: using governor menu Jun 25 14:53:57.149002 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 14:53:57.149008 kernel: ASID allocator initialised with 32768 entries Jun 25 14:53:57.149015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:53:57.149022 kernel: Serial: AMBA PL011 UART driver Jun 25 14:53:57.149029 kernel: KASLR enabled Jun 25 14:53:57.149035 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:53:57.149042 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:53:57.149048 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:53:57.149054 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:53:57.149061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:53:57.149067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:53:57.149074 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:53:57.149081 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:53:57.149088 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:53:57.149094 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:53:57.149100 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:53:57.149107 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:53:57.149113 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:53:57.149120 kernel: ACPI: Interpreter enabled Jun 25 14:53:57.149126 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:53:57.149133 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 14:53:57.149141 kernel: printk: console [ttyAMA0] enabled Jun 25 14:53:57.149148 kernel: printk: bootconsole [pl11] disabled Jun 25 14:53:57.149154 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 14:53:57.149161 kernel: iommu: Default domain type: Translated Jun 25 14:53:57.149167 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:53:57.149173 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:53:57.149180 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:53:57.149257 kernel: PTP clock support registered Jun 25 14:53:57.149265 kernel: Registered efivars operations Jun 25 14:53:57.149274 kernel: No ACPI PMU IRQ for CPU0 Jun 25 14:53:57.149280 kernel: No ACPI PMU IRQ for CPU1 Jun 25 14:53:57.149286 kernel: vgaarb: loaded Jun 25 14:53:57.149293 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:53:57.149299 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:53:57.149306 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:53:57.149312 kernel: pnp: PnP ACPI init Jun 25 14:53:57.149319 kernel: pnp: PnP ACPI: found 0 devices Jun 25 14:53:57.149325 kernel: NET: Registered PF_INET protocol family Jun 25 14:53:57.149333 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:53:57.149340 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:53:57.149346 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:53:57.149353 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:53:57.149359 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:53:57.149366 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:53:57.149372 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:53:57.149379 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:53:57.149385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:53:57.149393 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:53:57.149399 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 14:53:57.149406 kernel: kvm [1]: HYP mode not available Jun 25 14:53:57.149412 kernel: Initialise system trusted keyrings Jun 25 14:53:57.149419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:53:57.149425 kernel: Key type asymmetric registered Jun 25 14:53:57.149431 kernel: Asymmetric key parser 'x509' registered Jun 25 14:53:57.149437 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:53:57.149444 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:53:57.149452 kernel: io scheduler mq-deadline registered Jun 25 14:53:57.149459 kernel: io scheduler kyber registered Jun 25 14:53:57.149465 kernel: io scheduler bfq registered Jun 25 14:53:57.149472 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:53:57.149478 kernel: thunder_xcv, ver 1.0 Jun 25 14:53:57.149484 kernel: thunder_bgx, ver 1.0 Jun 25 14:53:57.149491 kernel: nicpf, ver 1.0 Jun 25 14:53:57.149497 kernel: nicvf, ver 1.0 Jun 25 14:53:57.149613 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:53:57.149675 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:53:56 UTC (1719327236) Jun 25 14:53:57.149685 kernel: efifb: probing for efifb Jun 25 14:53:57.149691 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 14:53:57.149698 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 14:53:57.149704 kernel: efifb: scrolling: redraw Jun 25 14:53:57.149711 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 14:53:57.149717 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:53:57.149724 kernel: fb0: EFI VGA frame buffer device Jun 25 14:53:57.149732 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not 
implemented, skipping .... Jun 25 14:53:57.149739 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:53:57.149745 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:53:57.149751 kernel: Segment Routing with IPv6 Jun 25 14:53:57.149758 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 14:53:57.149764 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:53:57.149771 kernel: Key type dns_resolver registered Jun 25 14:53:57.149777 kernel: registered taskstats version 1 Jun 25 14:53:57.149783 kernel: Loading compiled-in X.509 certificates Jun 25 14:53:57.149791 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:53:57.149797 kernel: Key type .fscrypt registered Jun 25 14:53:57.149804 kernel: Key type fscrypt-provisioning registered Jun 25 14:53:57.149810 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 14:53:57.149817 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:53:57.149823 kernel: ima: No architecture policies found Jun 25 14:53:57.149829 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:53:57.149836 kernel: clk: Disabling unused clocks Jun 25 14:53:57.149842 kernel: Freeing unused kernel memory: 34688K Jun 25 14:53:57.149850 kernel: Run /init as init process Jun 25 14:53:57.149856 kernel: with arguments: Jun 25 14:53:57.149863 kernel: /init Jun 25 14:53:57.149869 kernel: with environment: Jun 25 14:53:57.149875 kernel: HOME=/ Jun 25 14:53:57.149882 kernel: TERM=linux Jun 25 14:53:57.149888 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:53:57.149896 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:53:57.149906 systemd[1]: Detected virtualization microsoft. Jun 25 14:53:57.149913 systemd[1]: Detected architecture arm64. Jun 25 14:53:57.149920 systemd[1]: Running in initrd. Jun 25 14:53:57.149926 systemd[1]: No hostname configured, using default hostname. Jun 25 14:53:57.149933 systemd[1]: Hostname set to . Jun 25 14:53:57.149940 systemd[1]: Initializing machine ID from random generator. Jun 25 14:53:57.149947 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:53:57.149954 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:53:57.149962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:53:57.149969 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:53:57.149976 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:53:57.149983 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:53:57.149990 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:53:57.149997 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:53:57.150004 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:53:57.150012 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:53:57.150019 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:53:57.150027 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 14:53:57.150033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:53:57.150040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:53:57.150047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:53:57.150054 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:53:57.150061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:53:57.150068 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:53:57.150076 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:53:57.150083 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:53:57.150090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:53:57.150097 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jun 25 14:53:57.150107 systemd-journald[208]: Journal started Jun 25 14:53:57.150145 systemd-journald[208]: Runtime Journal (/run/log/journal/977db829c71845e9be907d76965ce6f7) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:53:57.143872 systemd-modules-load[209]: Inserted module 'overlay' Jun 25 14:53:57.181892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:53:57.181933 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:53:57.202984 kernel: Bridge firewalling registered Jun 25 14:53:57.203031 kernel: audit: type=1130 audit(1719327237.186:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.187404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:53:57.230449 kernel: audit: type=1130 audit(1719327237.208:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.187429 systemd-modules-load[209]: Inserted module 'br_netfilter' Jun 25 14:53:57.257480 kernel: audit: type=1130 audit(1719327237.234:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.257499 kernel: SCSI subsystem initialized Jun 25 14:53:57.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.209044 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:53:57.301349 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 25 14:53:57.301371 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:53:57.301380 kernel: audit: type=1130 audit(1719327237.278:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.234910 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:53:57.332153 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:53:57.308174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:53:57.319736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:53:57.331404 systemd-modules-load[209]: Inserted module 'dm_multipath' Jun 25 14:53:57.340334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:53:57.353219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:53:57.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.392172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:53:57.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.398505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:53:57.451844 kernel: audit: type=1130 audit(1719327237.370:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.451870 kernel: audit: type=1130 audit(1719327237.398:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.451879 kernel: audit: type=1130 audit(1719327237.427:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.427799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:53:57.482231 kernel: audit: type=1130 audit(1719327237.457:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:53:57.483870 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:53:57.500703 kernel: audit: type=1334 audit(1719327237.495:10): prog-id=6 op=LOAD Jun 25 14:53:57.495000 audit: BPF prog-id=6 op=LOAD Jun 25 14:53:57.501509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:53:57.519479 dracut-cmdline[229]: dracut-dracut-053 Jun 25 14:53:57.531360 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:53:57.521344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:53:57.577333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:53:57.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.580560 systemd-resolved[235]: Positive Trust Anchors: Jun 25 14:53:57.580566 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:53:57.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.580592 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:53:57.582767 systemd-resolved[235]: Defaulting to hostname 'linux'. Jun 25 14:53:57.585775 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:53:57.601744 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:53:57.706207 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:53:57.717203 kernel: iscsi: registered transport (tcp) Jun 25 14:53:57.734971 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:53:57.734987 kernel: QLogic iSCSI HBA Driver Jun 25 14:53:57.767164 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:53:57.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:57.779303 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jun 25 14:53:57.834205 kernel: raid6: neonx8 gen() 15776 MB/s Jun 25 14:53:57.854195 kernel: raid6: neonx4 gen() 15663 MB/s Jun 25 14:53:57.874203 kernel: raid6: neonx2 gen() 13218 MB/s Jun 25 14:53:57.895200 kernel: raid6: neonx1 gen() 10488 MB/s Jun 25 14:53:57.915194 kernel: raid6: int64x8 gen() 6978 MB/s Jun 25 14:53:57.935195 kernel: raid6: int64x4 gen() 7341 MB/s Jun 25 14:53:57.956195 kernel: raid6: int64x2 gen() 6133 MB/s Jun 25 14:53:57.979579 kernel: raid6: int64x1 gen() 5058 MB/s Jun 25 14:53:57.979588 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s Jun 25 14:53:58.004923 kernel: raid6: .... xor() 11891 MB/s, rmw enabled Jun 25 14:53:58.004937 kernel: raid6: using neon recovery algorithm Jun 25 14:53:58.013197 kernel: xor: measuring software checksum speed Jun 25 14:53:58.021270 kernel: 8regs : 19873 MB/sec Jun 25 14:53:58.021283 kernel: 32regs : 19649 MB/sec Jun 25 14:53:58.025514 kernel: arm64_neon : 27027 MB/sec Jun 25 14:53:58.030059 kernel: xor: using function: arm64_neon (27027 MB/sec) Jun 25 14:53:58.087207 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:53:58.096351 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:53:58.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:58.107000 audit: BPF prog-id=7 op=LOAD Jun 25 14:53:58.107000 audit: BPF prog-id=8 op=LOAD Jun 25 14:53:58.112386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:53:58.142686 systemd-udevd[409]: Using default interface naming scheme 'v252'. Jun 25 14:53:58.150517 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:53:58.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:58.170389 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:53:58.192266 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Jun 25 14:53:58.220922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:53:58.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:53:58.237607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:53:58.270873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:53:58.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:53:58.324217 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 14:53:58.338382 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 14:53:58.338432 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 14:53:58.344657 kernel: scsi host0: storvsc_host_t Jun 25 14:53:58.344829 kernel: scsi host1: storvsc_host_t Jun 25 14:53:58.344852 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 14:53:58.359563 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 14:53:58.359606 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 14:53:58.359640 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jun 25 14:53:58.377422 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jun 25 14:53:58.377478 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 14:53:58.396224 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 14:53:58.414680 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 14:53:58.416508 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 14:53:58.416520 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 14:53:58.429324 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 14:53:58.456318 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 14:53:58.456423 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 14:53:58.456506 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 14:53:58.456584 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 14:53:58.456661 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:53:58.456671 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 14:53:58.464203 kernel: hv_netvsc 002248b8-754d-0022-48b8-754d002248b8 eth0: VF slot 1 added Jun 25 14:53:58.471210 kernel: hv_vmbus: registering driver hv_pci Jun 25 14:53:58.480431 kernel: hv_pci 84f4085a-4717-405b-90ad-e1cc18bc3ff8: PCI VMBus probing: Using version 0x10004 Jun 25 14:53:58.563603 kernel: hv_pci 84f4085a-4717-405b-90ad-e1cc18bc3ff8: PCI host bridge to bus 4717:00 Jun 25 14:53:58.563713 kernel: pci_bus 4717:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 14:53:58.563804 kernel: pci_bus 4717:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 14:53:58.563877 kernel: pci 4717:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 14:53:58.563984 kernel: pci 4717:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:53:58.564086 kernel: pci 4717:00:02.0: enabling Extended Tags Jun 25 14:53:58.564173 kernel: pci 4717:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4717:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 14:53:58.564281 kernel: pci_bus 4717:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 14:53:58.564371 kernel: pci 4717:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 14:53:58.599127 kernel: mlx5_core 4717:00:02.0: enabling device (0000 -> 0002) Jun 25 14:53:58.821575 kernel: mlx5_core 4717:00:02.0: firmware version: 16.30.1284 Jun 25 14:53:58.821697 kernel: mlx5_core 4717:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Jun 25 14:53:58.821780 kernel: hv_netvsc 002248b8-754d-0022-48b8-754d002248b8 eth0: VF registering: eth1 Jun 25 14:53:58.821866 kernel: mlx5_core 4717:00:02.0 eth1: 
joined to eth0 Jun 25 14:53:58.835204 kernel: mlx5_core 4717:00:02.0 enP18199s1: renamed from eth1 Jun 25 14:53:59.307776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 14:53:59.380927 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (467) Jun 25 14:53:59.390879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:53:59.670241 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 14:53:59.726204 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (475) Jun 25 14:53:59.738258 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 14:53:59.744355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 14:53:59.770638 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:53:59.791248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:53:59.800209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:54:00.808772 disk-uuid[549]: The operation has completed successfully. Jun 25 14:54:00.813927 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 14:54:00.864869 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:54:00.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:00.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:00.864978 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:54:00.884619 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:54:00.897240 sh[661]: Success Jun 25 14:54:00.943214 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:54:01.308550 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:54:01.315627 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:54:01.340378 kernel: kauditd_printk_skb: 11 callbacks suppressed Jun 25 14:54:01.340404 kernel: audit: type=1130 audit(1719327241.331:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:01.325519 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 25 14:54:01.371206 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:54:01.371230 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:54:01.385523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:54:01.391147 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:54:01.395737 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:54:01.980115 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:54:01.985200 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:54:02.001579 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:54:02.007271 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 14:54:02.045204 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:54:02.045252 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:54:02.049800 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:54:02.106807 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:54:02.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.120000 audit: BPF prog-id=9 op=LOAD Jun 25 14:54:02.137462 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:54:02.154352 kernel: audit: type=1130 audit(1719327242.113:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.154374 kernel: audit: type=1334 audit(1719327242.120:24): prog-id=9 op=LOAD Jun 25 14:54:02.162887 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:54:02.204290 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:54:02.204313 kernel: audit: type=1130 audit(1719327242.181:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.172065 systemd-networkd[835]: lo: Link UP Jun 25 14:54:02.172068 systemd-networkd[835]: lo: Gained carrier Jun 25 14:54:02.172461 systemd-networkd[835]: Enumeration completed Jun 25 14:54:02.173114 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:54:02.173117 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:54:02.175369 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:54:02.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:02.197510 systemd[1]: Reached target network.target - Network. Jun 25 14:54:02.297125 kernel: audit: type=1130 audit(1719327242.250:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.297148 kernel: audit: type=1130 audit(1719327242.277:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.224570 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:54:02.238030 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:54:02.251124 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:54:02.310061 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:54:02.319175 systemd[1]: Starting iscsid.service - Open-iSCSI... Jun 25 14:54:02.332208 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:54:02.360416 kernel: mlx5_core 4717:00:02.0 enP18199s1: Link up Jun 25 14:54:02.361020 kernel: audit: type=1130 audit(1719327242.341:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.361078 iscsid[848]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:54:02.361078 iscsid[848]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 14:54:02.361078 iscsid[848]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:54:02.361078 iscsid[848]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:54:02.361078 iscsid[848]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:54:02.361078 iscsid[848]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:54:02.361078 iscsid[848]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:54:02.484018 kernel: hv_netvsc 002248b8-754d-0022-48b8-754d002248b8 eth0: Data path switched to VF: enP18199s1 Jun 25 14:54:02.484174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:54:02.484203 kernel: audit: type=1130 audit(1719327242.410:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:02.348179 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:54:02.380478 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:54:02.410444 systemd-networkd[835]: enP18199s1: Link UP Jun 25 14:54:02.410518 systemd-networkd[835]: eth0: Link UP Jun 25 14:54:02.532453 kernel: audit: type=1130 audit(1719327242.512:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:02.410630 systemd-networkd[835]: eth0: Gained carrier Jun 25 14:54:02.410638 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:54:02.411091 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:54:02.445232 systemd-networkd[835]: enP18199s1: Gained carrier Jun 25 14:54:02.445921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:54:02.456847 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:54:02.468284 systemd-networkd[835]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:54:02.488979 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:54:02.503312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:54:03.557287 systemd-networkd[835]: eth0: Gained IPv6LL Jun 25 14:54:03.811140 ignition[847]: Ignition 2.15.0 Jun 25 14:54:03.811155 ignition[847]: Stage: fetch-offline Jun 25 14:54:03.811207 ignition[847]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:03.815205 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:54:03.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:03.811226 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:03.857825 kernel: audit: type=1130 audit(1719327243.829:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:03.811323 ignition[847]: parsed url from cmdline: "" Jun 25 14:54:03.811327 ignition[847]: no config URL provided Jun 25 14:54:03.811332 ignition[847]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:54:03.861258 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 14:54:03.811339 ignition[847]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:54:03.811344 ignition[847]: failed to fetch config: resource requires networking Jun 25 14:54:03.811560 ignition[847]: Ignition finished successfully Jun 25 14:54:03.870930 ignition[867]: Ignition 2.15.0 Jun 25 14:54:03.870937 ignition[867]: Stage: fetch Jun 25 14:54:03.871113 ignition[867]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:03.871124 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:03.871234 ignition[867]: parsed url from cmdline: "" Jun 25 14:54:03.871240 ignition[867]: no config URL provided Jun 25 14:54:03.871245 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:54:03.871254 ignition[867]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:54:03.871285 ignition[867]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 14:54:03.983226 ignition[867]: GET result: OK Jun 25 14:54:03.983322 ignition[867]: config has been read from IMDS userdata Jun 25 14:54:03.983380 ignition[867]: parsing config with SHA512: 93b2984a89259db6807030cf6597cbc3818a700a3b534b25aadbcc84190c2274a8a704df80cfc0bae4d10e5d85f1c4aa0133951161aa8715dd21af4a56e9e1b8 Jun 25 14:54:03.987766 unknown[867]: fetched base config from "system" Jun 25 14:54:03.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:03.988253 ignition[867]: fetch: fetch complete Jun 25 14:54:03.987775 unknown[867]: fetched base config from "system" Jun 25 14:54:03.988258 ignition[867]: fetch: fetch passed Jun 25 14:54:03.987780 unknown[867]: fetched user config from "azure" Jun 25 14:54:03.988302 ignition[867]: Ignition finished successfully Jun 25 14:54:03.992264 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 14:54:04.029906 ignition[874]: Ignition 2.15.0 Jun 25 14:54:04.004093 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:54:04.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:04.029913 ignition[874]: Stage: kargs Jun 25 14:54:04.036254 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:54:04.030062 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:04.054041 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 14:54:04.030071 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:04.079432 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:54:04.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:04.031284 ignition[874]: kargs: kargs passed Jun 25 14:54:04.090297 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:54:04.031344 ignition[874]: Ignition finished successfully Jun 25 14:54:04.101174 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:54:04.076662 ignition[880]: Ignition 2.15.0 Jun 25 14:54:04.110844 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jun 25 14:54:04.076669 ignition[880]: Stage: disks Jun 25 14:54:04.122981 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:54:04.076785 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:04.132668 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:54:04.076794 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:04.160830 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:54:04.077749 ignition[880]: disks: disks passed Jun 25 14:54:04.077794 ignition[880]: Ignition finished successfully Jun 25 14:54:04.278396 systemd-fsck[888]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 14:54:04.287096 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:54:04.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:04.301376 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:54:04.359247 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:54:04.359580 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:54:04.364130 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:54:04.439299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:54:04.449115 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:54:04.479201 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (897) Jun 25 14:54:04.479224 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:54:04.454886 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 14:54:04.510276 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:54:04.510298 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:54:04.468312 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:54:04.468352 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:54:04.499306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 14:54:04.536372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:54:04.542887 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:54:05.711966 coreos-metadata[899]: Jun 25 14:54:05.711 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:54:05.721653 coreos-metadata[899]: Jun 25 14:54:05.721 INFO Fetch successful Jun 25 14:54:05.727207 coreos-metadata[899]: Jun 25 14:54:05.721 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:54:05.738180 coreos-metadata[899]: Jun 25 14:54:05.736 INFO Fetch successful Jun 25 14:54:05.766409 coreos-metadata[899]: Jun 25 14:54:05.766 INFO wrote hostname ci-3815.2.4-a-2c7c8223bb to /sysroot/etc/hostname Jun 25 14:54:05.775534 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jun 25 14:54:05.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:06.057073 initrd-setup-root[925]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:54:06.192129 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:54:06.201482 initrd-setup-root[939]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:54:06.210526 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:54:07.735585 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:54:07.751080 kernel: kauditd_printk_skb: 5 callbacks suppressed Jun 25 14:54:07.751102 kernel: audit: type=1130 audit(1719327247.741:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.769320 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:54:07.775410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:54:07.789497 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:54:07.806267 kernel: BTRFS info (device sda6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:54:07.827398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:54:07.853785 kernel: audit: type=1130 audit(1719327247.832:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.854401 ignition[1013]: INFO : Ignition 2.15.0 Jun 25 14:54:07.854401 ignition[1013]: INFO : Stage: mount Jun 25 14:54:07.868055 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:07.868055 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:07.868055 ignition[1013]: INFO : mount: mount passed Jun 25 14:54:07.868055 ignition[1013]: INFO : Ignition finished successfully Jun 25 14:54:07.907481 kernel: audit: type=1130 audit(1719327247.868:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:07.862319 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:54:07.903412 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:54:07.914578 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jun 25 14:54:07.948443 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1025) Jun 25 14:54:07.948505 kernel: BTRFS info (device sda6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:54:07.954567 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:54:07.958887 kernel: BTRFS info (device sda6): using free space tree Jun 25 14:54:07.962858 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:54:07.990238 ignition[1043]: INFO : Ignition 2.15.0 Jun 25 14:54:07.994251 ignition[1043]: INFO : Stage: files Jun 25 14:54:07.994251 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:07.994251 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:07.994251 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:54:08.054669 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:54:08.054669 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:54:08.219596 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:54:08.226848 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:54:08.226848 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:54:08.220043 unknown[1043]: wrote ssh authorized keys file for user: core Jun 25 14:54:08.246203 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:54:08.246203 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:54:08.246203 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:54:08.246203 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:54:08.354183 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 14:54:08.576356 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:54:08.586996 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:54:08.597246 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 14:54:08.948836 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 14:54:09.164110 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:54:09.176129 ignition[1043]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 14:54:09.235503 ignition[1043]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:54:09.249211 ignition[1043]: INFO : files: files passed Jun 25 14:54:09.249211 ignition[1043]: INFO : Ignition finished successfully Jun 25 14:54:09.453416 kernel: audit: type=1130 
audit(1719327249.254:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.453446 kernel: audit: type=1130 audit(1719327249.339:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.453456 kernel: audit: type=1131 audit(1719327249.354:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.453466 kernel: audit: type=1130 audit(1719327249.391:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.249139 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:54:09.289665 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:54:09.296697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:54:09.310362 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:54:09.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.503038 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:54:09.526697 kernel: audit: type=1130 audit(1719327249.483:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.526720 kernel: audit: type=1131 audit(1719327249.504:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.310485 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
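Among the actions recorded by the files stage above are a containerd drop-in written to /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf and a preset that enables prepare-helm.service. The sketch below reproduces those two unit-level actions by hand; the drop-in body and the preset file name do not appear in the log, so both are placeholders.

    #!/usr/bin/env python3
    # Sketch: recreate by hand the drop-in and preset actions the Ignition
    # files stage logs above. The drop-in contents and the preset file name
    # are assumptions; only the paths and unit names come from the log.
    import os

    SYSROOT = "/sysroot"

    def write_containerd_dropin() -> None:
        d = os.path.join(SYSROOT, "etc/systemd/system/containerd.service.d")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "10-use-cgroupfs.conf"), "w") as f:
            # Placeholder body: the real drop-in contents are not logged.
            f.write("[Service]\n# (contents supplied by the Ignition config)\n")

    def enable_prepare_helm_preset() -> None:
        d = os.path.join(SYSROOT, "etc/systemd/system-preset")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "20-ignition.preset"), "w") as f:
            # File name is an assumption; the log only records the action.
            f.write("enable prepare-helm.service\n")

    if __name__ == "__main__":
        write_containerd_dropin()
        enable_prepare_helm_preset()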
Jun 25 14:54:09.538889 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:54:09.538889 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:54:09.385567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:54:09.392335 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:54:09.447597 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:54:09.468838 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:54:09.468938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:54:09.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.504745 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:54:09.627882 kernel: audit: type=1130 audit(1719327249.599:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.532893 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:54:09.544583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:54:09.569014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:54:09.591892 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:54:09.647463 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:54:09.668088 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:54:09.680146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:54:09.685905 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:54:09.697174 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:54:09.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.697247 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:54:09.707491 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:54:09.717588 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:54:09.728447 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:54:09.739162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:54:09.749274 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:54:09.760381 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:54:09.771697 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:54:09.783745 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:54:09.794390 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:54:09.806031 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. 
Jun 25 14:54:09.817136 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:54:09.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.826348 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:54:09.826414 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:54:09.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.838262 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:54:09.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.848053 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:54:09.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.848110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:54:09.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.858851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:54:09.858890 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:54:09.869923 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:54:09.869957 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:54:09.880715 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 14:54:09.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.943881 ignition[1087]: INFO : Ignition 2.15.0 Jun 25 14:54:09.943881 ignition[1087]: INFO : Stage: umount Jun 25 14:54:09.943881 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:54:09.943881 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 14:54:09.943881 ignition[1087]: INFO : umount: umount passed Jun 25 14:54:09.943881 ignition[1087]: INFO : Ignition finished successfully Jun 25 14:54:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:09.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.880752 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 14:54:09.909288 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:54:09.921201 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:54:10.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.928691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:54:09.928794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:54:09.938139 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:54:09.938310 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:54:09.950901 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:54:09.951001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:54:09.961553 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:54:10.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.961962 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:54:09.962047 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:54:09.969456 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:54:10.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.969495 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:54:10.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.980800 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:54:10.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.162000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:54:09.980837 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:54:09.992825 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jun 25 14:54:10.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:09.992863 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 14:54:10.002373 systemd[1]: Stopped target network.target - Network. Jun 25 14:54:10.020377 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:54:10.020428 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:54:10.031264 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:54:10.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.040945 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 14:54:10.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.046031 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:54:10.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.052450 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:54:10.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.063359 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:54:10.073775 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:54:10.073805 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:54:10.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.084399 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:54:10.084420 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:54:10.094450 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:54:10.094491 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:54:10.106250 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:54:10.117323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:54:10.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.127688 systemd-networkd[835]: eth0: DHCPv6 lease lost Jun 25 14:54:10.340000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:54:10.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.128890 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jun 25 14:54:10.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.128980 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:54:10.374111 kernel: hv_netvsc 002248b8-754d-0022-48b8-754d002248b8 eth0: Data path switched from VF: enP18199s1 Jun 25 14:54:10.139963 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:54:10.140049 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:54:10.151017 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:54:10.151103 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:54:10.162283 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:54:10.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.162319 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:54:10.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.172470 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:54:10.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.172516 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:54:10.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.200467 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:54:10.209433 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:54:10.209513 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:54:10.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:10.222203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:54:10.222246 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:54:10.238309 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:54:10.238349 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:54:10.244783 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:54:10.244820 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:54:10.264208 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 25 14:54:10.276459 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:54:10.276533 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:54:10.277087 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:54:10.277222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:54:10.302535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:54:10.302588 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:54:10.312165 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:54:10.312218 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:54:10.322998 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:54:10.323057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:54:10.334550 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:54:10.334589 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:54:10.346309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:54:10.346343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:54:10.385349 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:54:10.398350 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 14:54:10.398419 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:54:10.414922 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:54:10.414972 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:54:10.421167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:54:10.421219 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:54:10.433875 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 14:54:10.434348 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:54:10.434422 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:54:10.456596 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:54:10.456699 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:54:10.467303 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:54:10.495229 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:54:10.965859 systemd[1]: Switching root. Jun 25 14:54:10.969000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:54:10.969000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:54:10.969000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:54:10.970000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:54:10.970000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:54:10.984177 iscsid[848]: iscsid shutting down. Jun 25 14:54:10.987590 systemd-journald[208]: Received SIGTERM from PID 1 (n/a). Jun 25 14:54:10.987651 systemd-journald[208]: Journal stopped Jun 25 14:54:17.987479 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
Jun 25 14:54:17.987501 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:54:17.987511 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:54:17.987522 kernel: SELinux: policy capability open_perms=1 Jun 25 14:54:17.987530 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:54:17.987538 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:54:17.987547 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:54:17.987555 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:54:17.987564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:54:17.987572 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:54:17.987583 kernel: kauditd_printk_skb: 42 callbacks suppressed Jun 25 14:54:17.987593 kernel: audit: type=1403 audit(1719327253.471:89): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:54:17.987604 systemd[1]: Successfully loaded SELinux policy in 215.283ms. Jun 25 14:54:17.987615 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.288ms. Jun 25 14:54:17.987631 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:54:17.987646 systemd[1]: Detected virtualization microsoft. Jun 25 14:54:17.987655 systemd[1]: Detected architecture arm64. Jun 25 14:54:17.987670 systemd[1]: Detected first boot. Jun 25 14:54:17.987681 systemd[1]: Hostname set to . Jun 25 14:54:17.987691 systemd[1]: Initializing machine ID from random generator. Jun 25 14:54:17.987701 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:54:17.987711 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:54:17.987723 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 14:54:17.987735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:54:17.987747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:54:17.987757 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:54:17.987767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:54:17.987781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:54:17.987799 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:54:17.987810 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:54:17.987819 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:54:17.987829 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:54:17.987838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:54:17.987848 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:54:17.987857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:54:17.987867 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jun 25 14:54:17.987876 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:54:17.987887 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:54:17.987897 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:54:17.987906 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:54:17.987918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:54:17.987928 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:54:17.987938 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:54:17.987948 kernel: audit: type=1400 audit(1719327257.190:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jun 25 14:54:17.987960 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:54:17.987970 kernel: audit: type=1335 audit(1719327257.190:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:54:17.987979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:54:17.987989 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:54:17.987998 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:54:17.988008 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:54:17.988017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:54:17.988027 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:54:17.988038 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:54:17.988048 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:54:17.988058 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:54:17.988067 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:54:17.988079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:54:17.988088 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:54:17.988098 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:54:17.988109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:54:17.988119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:54:17.988128 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:54:17.988138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:54:17.988148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:54:17.988159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:54:17.988169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:54:17.988179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:54:17.991983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jun 25 14:54:17.992013 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 14:54:17.992033 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 14:54:17.992052 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:54:17.992071 kernel: loop: module loaded Jun 25 14:54:17.992086 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:54:17.992098 kernel: fuse: init (API version 7.37) Jun 25 14:54:17.992108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:54:17.992118 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:54:17.992127 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:54:17.992137 kernel: audit: type=1305 audit(1719327257.974:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:54:17.992146 kernel: ACPI: bus type drm_connector registered Jun 25 14:54:17.992158 systemd-journald[1247]: Journal started Jun 25 14:54:17.992214 systemd-journald[1247]: Runtime Journal (/run/log/journal/d4ba4dd85a3f451da721b0dfdcfe5799) is 8.0M, max 78.6M, 70.6M free. Jun 25 14:54:17.190000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:54:17.974000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:54:18.000423 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:54:18.000473 kernel: audit: type=1300 audit(1719327257.974:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff35391d0 a2=4000 a3=1 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:17.974000 audit[1247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff35391d0 a2=4000 a3=1 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:17.974000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:54:18.037814 kernel: audit: type=1327 audit(1719327257.974:92): proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:54:18.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.042158 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:54:18.062928 kernel: audit: type=1130 audit(1719327258.041:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.063407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:54:18.069479 systemd[1]: Mounted media.mount - External Media Directory. 
Jun 25 14:54:18.074768 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:54:18.080819 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:54:18.086787 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:54:18.092431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:54:18.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.098957 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:54:18.099236 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 14:54:18.122808 kernel: audit: type=1130 audit(1719327258.098:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.123598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:54:18.123835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:54:18.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.154658 kernel: audit: type=1130 audit(1719327258.122:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.154693 kernel: audit: type=1131 audit(1719327258.122:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.160912 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:54:18.161130 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:54:18.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.167322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 14:54:18.167571 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:54:18.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.173784 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:54:18.173981 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:54:18.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.180383 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:54:18.180643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:54:18.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.187096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:54:18.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.193572 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:54:18.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.200140 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:54:18.211316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:54:18.218581 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:54:18.224155 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:54:18.225979 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:54:18.232692 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:54:18.237954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 25 14:54:18.239291 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:54:18.244663 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:54:18.245980 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:54:18.252038 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:54:18.351165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:54:18.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.369428 systemd-journald[1247]: Time spent on flushing to /var/log/journal/d4ba4dd85a3f451da721b0dfdcfe5799 is 13.685ms for 1004 entries. Jun 25 14:54:18.369428 systemd-journald[1247]: System Journal (/var/log/journal/d4ba4dd85a3f451da721b0dfdcfe5799) is 8.0M, max 2.6G, 2.6G free. Jun 25 14:54:18.472027 systemd-journald[1247]: Received client request to flush runtime journal. Jun 25 14:54:18.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.361344 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:54:18.410664 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:54:18.473029 udevadm[1274]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 14:54:18.421500 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:54:18.456924 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:54:18.466908 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:54:18.474086 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:54:18.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.483786 kernel: kauditd_printk_skb: 15 callbacks suppressed Jun 25 14:54:18.483828 kernel: audit: type=1130 audit(1719327258.479:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.505550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:54:18.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.533473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 25 14:54:18.539226 kernel: audit: type=1130 audit(1719327258.511:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.557039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:54:18.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.570918 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:54:18.586446 kernel: audit: type=1130 audit(1719327258.563:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.607208 kernel: audit: type=1130 audit(1719327258.588:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.609461 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:54:18.671977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:54:18.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:18.698244 kernel: audit: type=1130 audit(1719327258.678:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.293385 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:54:20.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.318211 kernel: audit: type=1130 audit(1719327260.299:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.323428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:54:20.355763 systemd-udevd[1293]: Using default interface naming scheme 'v252'. Jun 25 14:54:20.634265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:54:20.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.663374 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 25 14:54:20.672210 kernel: audit: type=1130 audit(1719327260.640:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.679364 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:54:20.718005 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jun 25 14:54:20.728068 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:54:20.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.759220 kernel: audit: type=1130 audit(1719327260.734:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.803212 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 14:54:20.803288 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1301) Jun 25 14:54:20.820213 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 14:54:20.820285 kernel: hv_vmbus: registering driver hv_balloon Jun 25 14:54:20.834412 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 14:54:20.834504 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 14:54:20.844263 kernel: Console: switching to colour dummy device 80x25 Jun 25 14:54:20.855105 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 14:54:20.855328 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 25 14:54:20.857226 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 14:54:20.868009 systemd-networkd[1299]: lo: Link UP Jun 25 14:54:20.868021 systemd-networkd[1299]: lo: Gained carrier Jun 25 14:54:20.868515 systemd-networkd[1299]: Enumeration completed Jun 25 14:54:20.868643 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:54:20.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.894759 systemd-networkd[1299]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:54:20.894771 systemd-networkd[1299]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:54:20.904042 kernel: audit: type=1130 audit(1719327260.874:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:20.898396 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jun 25 14:54:20.952209 kernel: mlx5_core 4717:00:02.0 enP18199s1: Link up Jun 25 14:54:20.980316 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 14:54:20.980391 kernel: hv_netvsc 002248b8-754d-0022-48b8-754d002248b8 eth0: Data path switched to VF: enP18199s1 Jun 25 14:54:20.980538 kernel: hv_vmbus: registering driver hv_utils Jun 25 14:54:20.981375 systemd-networkd[1299]: enP18199s1: Link UP Jun 25 14:54:20.981768 systemd-networkd[1299]: eth0: Link UP Jun 25 14:54:20.981849 systemd-networkd[1299]: eth0: Gained carrier Jun 25 14:54:20.981932 systemd-networkd[1299]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:54:20.984252 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 14:54:20.990062 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 14:54:20.991480 systemd-networkd[1299]: enP18199s1: Gained carrier Jun 25 14:54:20.994199 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 14:54:20.949444 systemd-networkd[1299]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:54:21.606530 systemd-journald[1247]: Time jumped backwards, rotating. Jun 25 14:54:21.606632 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1301) Jun 25 14:54:21.606653 kernel: audit: type=1130 audit(1719327261.053:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:21.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:21.041018 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 14:54:21.047620 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:54:21.071660 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:54:21.611547 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:54:21.670227 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:54:21.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:21.676618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:54:21.692467 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:54:21.697951 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:54:21.718246 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:54:21.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:21.724990 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Jun 25 14:54:21.731116 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:54:21.731246 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:54:21.736884 systemd[1]: Reached target machines.target - Containers. Jun 25 14:54:21.749746 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:54:21.755786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:54:21.755966 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:54:21.757748 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:54:21.768177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:54:21.775924 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:54:21.783425 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:54:22.115000 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Jun 25 14:54:22.122470 kernel: loop0: detected capacity change from 0 to 59648 Jun 25 14:54:22.122529 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:54:22.128655 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:54:22.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:22.835443 systemd-networkd[1299]: eth0: Gained IPv6LL Jun 25 14:54:22.842373 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:54:22.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:23.268607 systemd-fsck[1390]: fsck.fat 4.2 (2021-01-31) Jun 25 14:54:23.268607 systemd-fsck[1390]: /dev/sda1: 242 files, 114659/258078 clusters Jun 25 14:54:23.270804 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:54:23.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:23.283973 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:54:23.454749 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:54:23.469528 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. 
Jun 25 14:54:23.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:23.479395 kernel: kauditd_printk_skb: 5 callbacks suppressed Jun 25 14:54:23.479457 kernel: audit: type=1130 audit(1719327263.474:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:25.859308 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:54:25.888307 kernel: loop1: detected capacity change from 0 to 55744 Jun 25 14:54:28.417948 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:54:28.419090 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:54:28.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.444455 kernel: audit: type=1130 audit(1719327268.425:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.488316 kernel: loop2: detected capacity change from 0 to 113264 Jun 25 14:54:28.569316 kernel: loop3: detected capacity change from 0 to 193208 Jun 25 14:54:28.600311 kernel: loop4: detected capacity change from 0 to 59648 Jun 25 14:54:28.609337 kernel: loop5: detected capacity change from 0 to 55744 Jun 25 14:54:28.619620 kernel: loop6: detected capacity change from 0 to 113264 Jun 25 14:54:28.631005 kernel: loop7: detected capacity change from 0 to 193208 Jun 25 14:54:28.635920 (sd-sysext)[1406]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 14:54:28.636981 (sd-sysext)[1406]: Merged extensions into '/usr'. Jun 25 14:54:28.639358 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:54:28.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.663415 kernel: audit: type=1130 audit(1719327268.644:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.664483 systemd[1]: Starting ensure-sysext.service... Jun 25 14:54:28.670418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:54:28.688148 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:54:28.689778 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:54:28.690048 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:54:28.690585 systemd[1]: Reloading. 
Jun 25 14:54:28.690771 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:54:28.868444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:54:28.935394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:54:28.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.960502 kernel: audit: type=1130 audit(1719327268.941:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:28.968170 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:54:28.976640 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:54:28.984183 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:54:28.991752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:54:29.001077 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:54:29.008376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:54:29.019000 audit[1503]: SYSTEM_BOOT pid=1503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.023492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:54:29.038837 kernel: audit: type=1127 audit(1719327269.019:131): pid=1503 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.041270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:54:29.055328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:54:29.066172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:54:29.075220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:54:29.075465 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:54:29.077020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:54:29.077203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:54:29.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.085236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 14:54:29.085419 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:54:29.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.123329 kernel: audit: type=1130 audit(1719327269.083:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.123447 kernel: audit: type=1131 audit(1719327269.083:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.130654 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:54:29.140348 kernel: audit: type=1130 audit(1719327269.122:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.164602 kernel: audit: type=1131 audit(1719327269.122:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.177193 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:54:29.177422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:54:29.202229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:54:29.202431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:54:29.207783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:54:29.231414 kernel: audit: type=1130 audit(1719327269.175:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.231511 kernel: audit: type=1130 audit(1719327269.200:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:54:29.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.230931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:54:29.237776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:54:29.245669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:54:29.268772 kernel: audit: type=1131 audit(1719327269.200:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.268824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:54:29.276924 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:54:29.277078 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:54:29.277958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:54:29.278148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:54:29.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.286873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:54:29.287046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:54:29.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.293949 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:54:29.294150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jun 25 14:54:29.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:54:29.301095 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:54:29.299000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:54:29.299000 audit[1527]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc7b0d970 a2=420 a3=0 items=0 ppid=1492 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:54:29.299000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:54:29.303465 augenrules[1527]: No rules Jun 25 14:54:29.307780 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:54:29.314135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:54:29.314233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:54:29.314826 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:54:29.323712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:54:29.326815 systemd-resolved[1496]: Positive Trust Anchors: Jun 25 14:54:29.326994 systemd-resolved[1496]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:54:29.327021 systemd-resolved[1496]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:54:29.330723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:54:29.331764 systemd-resolved[1496]: Using system hostname 'ci-3815.2.4-a-2c7c8223bb'. Jun 25 14:54:29.339045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:54:29.353865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:54:29.361425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:54:29.367075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:54:29.367216 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jun 25 14:54:29.367932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:54:29.375047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:54:29.375235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:54:29.382674 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:54:29.382936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:54:29.392588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:54:29.392771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:54:29.399763 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:54:29.399968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:54:29.407249 systemd[1]: Reached target network.target - Network. Jun 25 14:54:29.412616 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:54:29.418844 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:54:29.425792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:54:29.425846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:54:29.426414 systemd[1]: Finished ensure-sysext.service. Jun 25 14:54:29.475633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:54:29.482508 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:54:29.510530 systemd-timesyncd[1502]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). Jun 25 14:54:29.510610 systemd-timesyncd[1502]: Initial clock synchronization to Tue 2024-06-25 14:54:29.494469 UTC. Jun 25 14:54:31.790121 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:54:31.815033 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:54:31.826634 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:54:31.839896 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 14:54:31.846308 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:54:31.852218 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:54:31.858195 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:54:31.864479 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:54:31.870236 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:54:31.876162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:54:31.882420 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:54:31.882462 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:54:31.887571 systemd[1]: Reached target timers.target - Timer Units. 
Jun 25 14:54:31.893366 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:54:31.901478 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:54:31.907573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:54:31.913644 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:54:31.914339 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:54:31.920512 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:54:31.925662 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:54:31.930923 systemd[1]: System is tainted: cgroupsv1 Jun 25 14:54:31.931072 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:54:31.931169 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:54:31.932656 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:54:31.940198 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 14:54:31.947548 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:54:31.954094 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:54:31.961340 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:54:31.962308 jq[1561]: false Jun 25 14:54:31.966781 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:54:31.996472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:32.004014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:54:32.010957 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:54:32.017829 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:54:32.025066 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:54:32.032167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:54:32.041241 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:54:32.051860 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:54:32.051943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:54:32.058658 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:54:32.067144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jun 25 14:54:32.072676 jq[1585]: true Jun 25 14:54:32.076480 extend-filesystems[1562]: Found loop4 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found loop5 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found loop6 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found loop7 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda1 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda2 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda3 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found usr Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda4 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda6 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda7 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sda9 Jun 25 14:54:32.120728 extend-filesystems[1562]: Checking size of /dev/sda9 Jun 25 14:54:32.120728 extend-filesystems[1562]: Old size kept for /dev/sda9 Jun 25 14:54:32.120728 extend-filesystems[1562]: Found sr0 Jun 25 14:54:32.080749 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:54:32.233894 dbus-daemon[1560]: [system] SELinux support is enabled Jun 25 14:54:32.347188 update_engine[1580]: I0625 14:54:32.188532 1580 main.cc:92] Flatcar Update Engine starting Jun 25 14:54:32.347188 update_engine[1580]: I0625 14:54:32.237920 1580 update_check_scheduler.cc:74] Next update check in 2m23s Jun 25 14:54:32.081018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:54:32.255849 dbus-daemon[1560]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 14:54:32.086799 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:54:32.087057 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:54:32.350614 tar[1599]: linux-arm64/helm Jun 25 14:54:32.102123 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:54:32.103630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:54:32.351395 jq[1603]: true Jun 25 14:54:32.136990 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:54:32.137246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:54:32.145149 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:54:32.211494 systemd-logind[1576]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jun 25 14:54:32.211939 systemd-logind[1576]: New seat seat0. Jun 25 14:54:32.234166 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:54:32.255102 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:54:32.255125 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:54:32.268322 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:54:32.268343 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:54:32.279622 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:54:32.289991 systemd[1]: Started update-engine.service - Update Engine. 
Jun 25 14:54:32.300732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:54:32.312679 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 14:54:32.402409 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:54:32.403458 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:54:32.413102 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 25 14:54:32.447696 coreos-metadata[1557]: Jun 25 14:54:32.446 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 14:54:32.452970 coreos-metadata[1557]: Jun 25 14:54:32.452 INFO Fetch successful Jun 25 14:54:32.453100 coreos-metadata[1557]: Jun 25 14:54:32.453 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 14:54:32.457986 coreos-metadata[1557]: Jun 25 14:54:32.457 INFO Fetch successful Jun 25 14:54:32.458449 coreos-metadata[1557]: Jun 25 14:54:32.458 INFO Fetching http://168.63.129.16/machine/531f8ab4-7426-4bba-af69-5dd196ff4b24/a91267c1%2D2775%2D47c4%2D95b8%2D04191355371b.%5Fci%2D3815.2.4%2Da%2D2c7c8223bb?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 14:54:32.464110 coreos-metadata[1557]: Jun 25 14:54:32.464 INFO Fetch successful Jun 25 14:54:32.464462 coreos-metadata[1557]: Jun 25 14:54:32.464 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 14:54:32.480207 coreos-metadata[1557]: Jun 25 14:54:32.478 INFO Fetch successful Jun 25 14:54:32.513325 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1630) Jun 25 14:54:32.524986 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 14:54:32.541777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:54:32.590518 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:54:32.907813 tar[1599]: linux-arm64/LICENSE Jun 25 14:54:32.908029 tar[1599]: linux-arm64/README.md Jun 25 14:54:32.917524 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:54:33.043529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:33.103599 containerd[1604]: time="2024-06-25T14:54:33.103499394Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:54:33.153383 containerd[1604]: time="2024-06-25T14:54:33.153336068Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:54:33.153903 containerd[1604]: time="2024-06-25T14:54:33.153883618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.155816 containerd[1604]: time="2024-06-25T14:54:33.155782179Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:54:33.155904 containerd[1604]: time="2024-06-25T14:54:33.155890130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 14:54:33.156265 containerd[1604]: time="2024-06-25T14:54:33.156238804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157111 containerd[1604]: time="2024-06-25T14:54:33.157063447Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:54:33.157199 containerd[1604]: time="2024-06-25T14:54:33.157175315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157253 containerd[1604]: time="2024-06-25T14:54:33.157233027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157309 containerd[1604]: time="2024-06-25T14:54:33.157251572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157351 containerd[1604]: time="2024-06-25T14:54:33.157331427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157564 containerd[1604]: time="2024-06-25T14:54:33.157537537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.157615 containerd[1604]: time="2024-06-25T14:54:33.157563876Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:54:33.157615 containerd[1604]: time="2024-06-25T14:54:33.157574987Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:54:33.158805 containerd[1604]: time="2024-06-25T14:54:33.157734376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:54:33.158805 containerd[1604]: time="2024-06-25T14:54:33.158762371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:54:33.158898 containerd[1604]: time="2024-06-25T14:54:33.158845024Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:54:33.158898 containerd[1604]: time="2024-06-25T14:54:33.158857413Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178533935Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178582375Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178599521Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178641087Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178656554Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178667865Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178680414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178920018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178947475Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178960784Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178975452Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.178988961Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.179010783Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181036 containerd[1604]: time="2024-06-25T14:54:33.179024932Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179038161Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179052589Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179065498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179077528Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179089998Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179187078Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179530117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179559612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179573561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179596782Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179653415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179668603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179686149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181415 containerd[1604]: time="2024-06-25T14:54:33.179699378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179711648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179723238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179735508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179746859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179759169Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179906568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179924073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179937102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179949412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179962162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179975871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179988061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:54:33.181670 containerd[1604]: time="2024-06-25T14:54:33.179999571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180233019Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180309836Z" level=info msg="Connect containerd service" Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180346406Z" level=info msg="using legacy CRI server" Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180354240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180380378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:54:33.181903 containerd[1604]: time="2024-06-25T14:54:33.180902789Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:54:33.182972 containerd[1604]: 
time="2024-06-25T14:54:33.182676093Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:54:33.182972 containerd[1604]: time="2024-06-25T14:54:33.182714701Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:54:33.182972 containerd[1604]: time="2024-06-25T14:54:33.182726172Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:54:33.182972 containerd[1604]: time="2024-06-25T14:54:33.182736683Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:54:33.183252 containerd[1604]: time="2024-06-25T14:54:33.183231957Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:54:33.183388 containerd[1604]: time="2024-06-25T14:54:33.183372641Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:54:33.183535 containerd[1604]: time="2024-06-25T14:54:33.183507890Z" level=info msg="Start subscribing containerd event" Jun 25 14:54:33.183624 containerd[1604]: time="2024-06-25T14:54:33.183610086Z" level=info msg="Start recovering state" Jun 25 14:54:33.183736 containerd[1604]: time="2024-06-25T14:54:33.183723153Z" level=info msg="Start event monitor" Jun 25 14:54:33.183798 containerd[1604]: time="2024-06-25T14:54:33.183784303Z" level=info msg="Start snapshots syncer" Jun 25 14:54:33.183853 containerd[1604]: time="2024-06-25T14:54:33.183840937Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:54:33.183907 containerd[1604]: time="2024-06-25T14:54:33.183895372Z" level=info msg="Start streaming server" Jun 25 14:54:33.184032 containerd[1604]: time="2024-06-25T14:54:33.184016672Z" level=info msg="containerd successfully booted in 0.083344s" Jun 25 14:54:33.184123 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:54:33.510977 kubelet[1689]: E0625 14:54:33.510850 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:54:33.513185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:54:33.513360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:54:34.488957 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:54:34.506661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:54:34.518647 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:54:34.524715 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 14:54:34.530479 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:54:34.530684 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:54:34.540368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:54:34.546660 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 14:54:34.559914 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:54:34.571894 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jun 25 14:54:34.578890 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 14:54:34.585414 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:54:34.590955 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:54:34.604782 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:54:34.618839 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:54:34.619077 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:54:34.625796 systemd[1]: Startup finished in 17.154s (kernel) + 21.418s (userspace) = 38.573s. Jun 25 14:54:34.771813 login[1725]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 14:54:34.774257 login[1726]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:54:34.781878 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:54:34.792580 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:54:34.796134 systemd-logind[1576]: New session 1 of user core. Jun 25 14:54:34.804051 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:54:34.810696 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:54:34.815895 (systemd)[1732]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:54:34.910456 systemd[1732]: Queued start job for default target default.target. Jun 25 14:54:34.911024 systemd[1732]: Reached target paths.target - Paths. Jun 25 14:54:34.911053 systemd[1732]: Reached target sockets.target - Sockets. Jun 25 14:54:34.911063 systemd[1732]: Reached target timers.target - Timers. Jun 25 14:54:34.911072 systemd[1732]: Reached target basic.target - Basic System. Jun 25 14:54:34.911190 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:54:34.912397 systemd[1732]: Reached target default.target - Main User Target. Jun 25 14:54:34.912442 systemd[1732]: Startup finished in 90ms. Jun 25 14:54:34.917560 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:54:35.772164 login[1725]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 14:54:35.776326 systemd-logind[1576]: New session 2 of user core. Jun 25 14:54:35.779571 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jun 25 14:54:35.847149 waagent[1723]: 2024-06-25T14:54:35.847062Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 14:54:35.853247 waagent[1723]: 2024-06-25T14:54:35.853183Z INFO Daemon Daemon OS: flatcar 3815.2.4 Jun 25 14:54:35.857896 waagent[1723]: 2024-06-25T14:54:35.857846Z INFO Daemon Daemon Python: 3.11.6 Jun 25 14:54:35.862327 waagent[1723]: 2024-06-25T14:54:35.862255Z INFO Daemon Daemon Run daemon Jun 25 14:54:35.866384 waagent[1723]: 2024-06-25T14:54:35.866344Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.4' Jun 25 14:54:35.875997 waagent[1723]: 2024-06-25T14:54:35.875925Z INFO Daemon Daemon Using waagent for provisioning Jun 25 14:54:35.881555 waagent[1723]: 2024-06-25T14:54:35.881508Z INFO Daemon Daemon Activate resource disk Jun 25 14:54:35.886488 waagent[1723]: 2024-06-25T14:54:35.886439Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 14:54:35.897679 waagent[1723]: 2024-06-25T14:54:35.897629Z INFO Daemon Daemon Found device: None Jun 25 14:54:35.902301 waagent[1723]: 2024-06-25T14:54:35.902246Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 14:54:35.910608 waagent[1723]: 2024-06-25T14:54:35.910561Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 14:54:35.921941 waagent[1723]: 2024-06-25T14:54:35.921898Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:54:35.927577 waagent[1723]: 2024-06-25T14:54:35.927535Z INFO Daemon Daemon Running default provisioning handler Jun 25 14:54:35.939643 waagent[1723]: 2024-06-25T14:54:35.939588Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jun 25 14:54:35.953358 waagent[1723]: 2024-06-25T14:54:35.953297Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 14:54:35.963543 waagent[1723]: 2024-06-25T14:54:35.963467Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 14:54:35.968672 waagent[1723]: 2024-06-25T14:54:35.968612Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 14:54:36.060989 waagent[1723]: 2024-06-25T14:54:36.060859Z INFO Daemon Daemon Successfully mounted dvd Jun 25 14:54:36.079325 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 14:54:36.107892 waagent[1723]: 2024-06-25T14:54:36.107820Z INFO Daemon Daemon Detect protocol endpoint Jun 25 14:54:36.113151 waagent[1723]: 2024-06-25T14:54:36.113089Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 14:54:36.119301 waagent[1723]: 2024-06-25T14:54:36.119231Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 14:54:36.126408 waagent[1723]: 2024-06-25T14:54:36.126357Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 14:54:36.132141 waagent[1723]: 2024-06-25T14:54:36.132094Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 14:54:36.138243 waagent[1723]: 2024-06-25T14:54:36.138188Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 14:54:36.153968 waagent[1723]: 2024-06-25T14:54:36.153918Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 14:54:36.161312 waagent[1723]: 2024-06-25T14:54:36.161271Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 14:54:36.167194 waagent[1723]: 2024-06-25T14:54:36.167148Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 14:54:36.516859 waagent[1723]: 2024-06-25T14:54:36.516730Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 14:54:36.524027 waagent[1723]: 2024-06-25T14:54:36.523966Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 14:54:36.535171 waagent[1723]: 2024-06-25T14:54:36.535122Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:54:36.571625 waagent[1723]: 2024-06-25T14:54:36.571581Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 14:54:36.578012 waagent[1723]: 2024-06-25T14:54:36.577971Z INFO Daemon Jun 25 14:54:36.581678 waagent[1723]: 2024-06-25T14:54:36.581636Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e3082976-6f32-41b9-bcfa-04bbba6ff26a eTag: 10454322376089095985 source: Fabric] Jun 25 14:54:36.593864 waagent[1723]: 2024-06-25T14:54:36.593822Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 14:54:36.601062 waagent[1723]: 2024-06-25T14:54:36.601022Z INFO Daemon Jun 25 14:54:36.604332 waagent[1723]: 2024-06-25T14:54:36.604277Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:54:36.616339 waagent[1723]: 2024-06-25T14:54:36.616304Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 14:54:36.706164 waagent[1723]: 2024-06-25T14:54:36.706095Z INFO Daemon Downloaded certificate {'thumbprint': 'CF944E266C510E354103CC193FA53C0615CED44D', 'hasPrivateKey': True} Jun 25 14:54:36.716540 waagent[1723]: 2024-06-25T14:54:36.716493Z INFO Daemon Downloaded certificate {'thumbprint': '95B049FC546F050E23A96FCE890FB9D0AF96FA05', 'hasPrivateKey': False} Jun 25 14:54:36.727153 waagent[1723]: 2024-06-25T14:54:36.727108Z INFO Daemon Fetch goal state completed Jun 25 14:54:36.739968 waagent[1723]: 2024-06-25T14:54:36.739927Z INFO Daemon Daemon Starting provisioning Jun 25 14:54:36.744774 waagent[1723]: 2024-06-25T14:54:36.744730Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 14:54:36.749332 waagent[1723]: 2024-06-25T14:54:36.749276Z INFO Daemon Daemon Set hostname [ci-3815.2.4-a-2c7c8223bb] Jun 25 14:54:36.791178 waagent[1723]: 2024-06-25T14:54:36.791107Z INFO Daemon Daemon Publish hostname [ci-3815.2.4-a-2c7c8223bb] Jun 25 14:54:36.797959 waagent[1723]: 2024-06-25T14:54:36.797897Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 14:54:36.805675 waagent[1723]: 2024-06-25T14:54:36.805620Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 14:54:37.044683 systemd-networkd[1299]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:54:37.044692 systemd-networkd[1299]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 25 14:54:37.044720 systemd-networkd[1299]: eth0: DHCP lease lost Jun 25 14:54:37.046140 waagent[1723]: 2024-06-25T14:54:37.046056Z INFO Daemon Daemon Create user account if not exists Jun 25 14:54:37.052071 waagent[1723]: 2024-06-25T14:54:37.052009Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 14:54:37.052359 systemd-networkd[1299]: eth0: DHCPv6 lease lost Jun 25 14:54:37.058031 waagent[1723]: 2024-06-25T14:54:37.057966Z INFO Daemon Daemon Configure sudoer Jun 25 14:54:37.062564 waagent[1723]: 2024-06-25T14:54:37.062502Z INFO Daemon Daemon Configure sshd Jun 25 14:54:37.067064 waagent[1723]: 2024-06-25T14:54:37.067002Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 14:54:37.079916 waagent[1723]: 2024-06-25T14:54:37.079853Z INFO Daemon Daemon Deploy ssh public key. Jun 25 14:54:37.091163 systemd-networkd[1299]: eth0: DHCPv4 address 10.200.20.26/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 14:54:38.308028 waagent[1723]: 2024-06-25T14:54:38.307981Z INFO Daemon Daemon Provisioning complete Jun 25 14:54:38.328344 waagent[1723]: 2024-06-25T14:54:38.328275Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 14:54:38.334332 waagent[1723]: 2024-06-25T14:54:38.334274Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 14:54:38.343730 waagent[1723]: 2024-06-25T14:54:38.343682Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 14:54:38.476525 waagent[1781]: 2024-06-25T14:54:38.476452Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 14:54:38.476957 waagent[1781]: 2024-06-25T14:54:38.476917Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.4 Jun 25 14:54:38.477097 waagent[1781]: 2024-06-25T14:54:38.477065Z INFO ExtHandler ExtHandler Python: 3.11.6 Jun 25 14:54:38.570793 waagent[1781]: 2024-06-25T14:54:38.570669Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.4; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 14:54:38.571115 waagent[1781]: 2024-06-25T14:54:38.571078Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:54:38.571268 waagent[1781]: 2024-06-25T14:54:38.571235Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:54:38.578026 waagent[1781]: 2024-06-25T14:54:38.577971Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 14:54:38.584870 waagent[1781]: 2024-06-25T14:54:38.584829Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 14:54:38.585502 waagent[1781]: 2024-06-25T14:54:38.585462Z INFO ExtHandler Jun 25 14:54:38.585656 waagent[1781]: 2024-06-25T14:54:38.585625Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: b1114429-d0bc-4b2c-83ce-0407c0a8aff9 eTag: 10454322376089095985 source: Fabric] Jun 25 14:54:38.586038 waagent[1781]: 2024-06-25T14:54:38.586001Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 14:54:38.586733 waagent[1781]: 2024-06-25T14:54:38.586692Z INFO ExtHandler Jun 25 14:54:38.586881 waagent[1781]: 2024-06-25T14:54:38.586849Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 14:54:38.590756 waagent[1781]: 2024-06-25T14:54:38.590725Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 14:54:38.678380 waagent[1781]: 2024-06-25T14:54:38.678276Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CF944E266C510E354103CC193FA53C0615CED44D', 'hasPrivateKey': True} Jun 25 14:54:38.678962 waagent[1781]: 2024-06-25T14:54:38.678923Z INFO ExtHandler Downloaded certificate {'thumbprint': '95B049FC546F050E23A96FCE890FB9D0AF96FA05', 'hasPrivateKey': False} Jun 25 14:54:38.679566 waagent[1781]: 2024-06-25T14:54:38.679523Z INFO ExtHandler Fetch goal state completed Jun 25 14:54:38.698128 waagent[1781]: 2024-06-25T14:54:38.698074Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1781 Jun 25 14:54:38.698426 waagent[1781]: 2024-06-25T14:54:38.698387Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 14:54:38.700242 waagent[1781]: 2024-06-25T14:54:38.700202Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.4', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 14:54:38.700802 waagent[1781]: 2024-06-25T14:54:38.700764Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 14:54:38.816104 waagent[1781]: 2024-06-25T14:54:38.816065Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 14:54:38.816463 waagent[1781]: 2024-06-25T14:54:38.816419Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 14:54:38.822778 waagent[1781]: 2024-06-25T14:54:38.822707Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 14:54:38.829445 systemd[1]: Reloading. Jun 25 14:54:38.983594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:54:39.060785 waagent[1781]: 2024-06-25T14:54:39.060703Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 14:54:39.065815 systemd[1]: Reloading. Jun 25 14:54:39.228013 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:54:39.298985 waagent[1781]: 2024-06-25T14:54:39.298893Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 14:54:39.299116 waagent[1781]: 2024-06-25T14:54:39.299077Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 14:54:39.873312 waagent[1781]: 2024-06-25T14:54:39.873220Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 14:54:39.873910 waagent[1781]: 2024-06-25T14:54:39.873855Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 14:54:39.874695 waagent[1781]: 2024-06-25T14:54:39.874611Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 14:54:39.875356 waagent[1781]: 2024-06-25T14:54:39.874941Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:54:39.875356 waagent[1781]: 2024-06-25T14:54:39.875036Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:54:39.875356 waagent[1781]: 2024-06-25T14:54:39.875246Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 14:54:39.875525 waagent[1781]: 2024-06-25T14:54:39.875471Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 14:54:39.875525 waagent[1781]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 14:54:39.875525 waagent[1781]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 14:54:39.875525 waagent[1781]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 14:54:39.875525 waagent[1781]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:54:39.875525 waagent[1781]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:54:39.875525 waagent[1781]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 14:54:39.875941 waagent[1781]: 2024-06-25T14:54:39.875892Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 14:54:39.876446 waagent[1781]: 2024-06-25T14:54:39.876382Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 14:54:39.876578 waagent[1781]: 2024-06-25T14:54:39.876532Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 14:54:39.877019 waagent[1781]: 2024-06-25T14:54:39.876958Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 14:54:39.877123 waagent[1781]: 2024-06-25T14:54:39.877086Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 14:54:39.877267 waagent[1781]: 2024-06-25T14:54:39.877221Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 25 14:54:39.878307 waagent[1781]: 2024-06-25T14:54:39.878245Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 14:54:39.878998 waagent[1781]: 2024-06-25T14:54:39.878954Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 14:54:39.879488 waagent[1781]: 2024-06-25T14:54:39.879438Z INFO EnvHandler ExtHandler Configure routes Jun 25 14:54:39.879839 waagent[1781]: 2024-06-25T14:54:39.879796Z INFO EnvHandler ExtHandler Gateway:None Jun 25 14:54:39.880343 waagent[1781]: 2024-06-25T14:54:39.880278Z INFO EnvHandler ExtHandler Routes:None Jun 25 14:54:39.893020 waagent[1781]: 2024-06-25T14:54:39.892972Z INFO ExtHandler ExtHandler Jun 25 14:54:39.893250 waagent[1781]: 2024-06-25T14:54:39.893202Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7087b9d7-bf11-41a1-9d79-28619332531b correlation cc688540-9221-42ec-a4b7-9671ddf95372 created: 2024-06-25T14:52:42.846260Z] Jun 25 14:54:39.893888 waagent[1781]: 2024-06-25T14:54:39.893833Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
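The MonitorHandler dump above is the raw /proc/net/route table, in which the Destination, Gateway and Mask columns are little-endian 32-bit hexadecimal IPv4 values. A minimal decoding sketch follows (the hex_to_ip helper and the ROWS list are illustrative, not part of waagent; the hex values are copied from the dump): the default route's gateway 0114C80A decodes to 10.200.20.1, 10813FA8 to the Azure wireserver 168.63.129.16, and FEA9FEA9 to 169.254.169.254, consistent with the DHCPv4 lease reported earlier in this log.

#!/usr/bin/env python3
# Decode /proc/net/route hex columns into dotted-quad IPv4 addresses.
import socket
import struct

def hex_to_ip(hex_value: str) -> str:
    # /proc/net/route stores addresses as little-endian 32-bit hex
    return socket.inet_ntoa(struct.pack("<L", int(hex_value, 16)))

# (destination, gateway, mask) triples copied from the waagent dump above
ROWS = [
    ("00000000", "0114C80A", "00000000"),
    ("0014C80A", "00000000", "00FFFFFF"),
    ("0114C80A", "00000000", "FFFFFFFF"),
    ("10813FA8", "0114C80A", "FFFFFFFF"),
    ("FEA9FEA9", "0114C80A", "FFFFFFFF"),
]

for dst, gw, mask in ROWS:
    print(f"dst {hex_to_ip(dst):<15} via {hex_to_ip(gw):<15} mask {hex_to_ip(mask)}")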
Jun 25 14:54:39.895148 waagent[1781]: 2024-06-25T14:54:39.895102Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jun 25 14:54:39.913242 waagent[1781]: 2024-06-25T14:54:39.913170Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 14:54:39.913242 waagent[1781]: Executing ['ip', '-a', '-o', 'link']: Jun 25 14:54:39.913242 waagent[1781]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 14:54:39.913242 waagent[1781]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:75:4d brd ff:ff:ff:ff:ff:ff Jun 25 14:54:39.913242 waagent[1781]: 3: enP18199s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b8:75:4d brd ff:ff:ff:ff:ff:ff\ altname enP18199p0s2 Jun 25 14:54:39.913242 waagent[1781]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 14:54:39.913242 waagent[1781]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 14:54:39.913242 waagent[1781]: 2: eth0 inet 10.200.20.26/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 14:54:39.913242 waagent[1781]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 14:54:39.913242 waagent[1781]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jun 25 14:54:39.913242 waagent[1781]: 2: eth0 inet6 fe80::222:48ff:feb8:754d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 14:54:39.953121 waagent[1781]: 2024-06-25T14:54:39.953072Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 01581C8B-DAE8-4819-A652-3FFF56FDAB11;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 14:54:39.960539 waagent[1781]: 2024-06-25T14:54:39.960482Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 14:54:39.960539 waagent[1781]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.960539 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.960539 waagent[1781]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.960539 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.960539 waagent[1781]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.960539 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.960539 waagent[1781]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:54:39.960539 waagent[1781]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:54:39.960539 waagent[1781]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:54:39.964530 waagent[1781]: 2024-06-25T14:54:39.964480Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 14:54:39.964530 waagent[1781]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.964530 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.964530 waagent[1781]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.964530 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.964530 waagent[1781]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 14:54:39.964530 waagent[1781]: pkts bytes target prot opt in out source destination Jun 25 14:54:39.964530 waagent[1781]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 14:54:39.964530 waagent[1781]: 12 1214 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 14:54:39.964530 waagent[1781]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 14:54:39.965071 waagent[1781]: 2024-06-25T14:54:39.965040Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 14:54:43.763887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:54:43.764078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:43.771653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:43.858318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:43.903429 kubelet[1996]: E0625 14:54:43.903381 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:54:43.906342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:54:43.906497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:54:54.157520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:54:54.157696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:54:54.165664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:54:54.253345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:54:54.303683 kubelet[2011]: E0625 14:54:54.303618 2011 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:54:54.305947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:54:54.306103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:04.460224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 14:55:04.460419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:04.467597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:04.548338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:04.604874 kubelet[2027]: E0625 14:55:04.604813 2027 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:04.607059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:04.607213 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:08.964531 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 25 14:55:14.710195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 14:55:14.710398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:14.718609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:14.847630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:14.902124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:14.902281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:15.196080 kubelet[2042]: E0625 14:55:14.900272 2042 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/ Jun 25 14:55:15.196080 kubelet[2042]: lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:17.092604 update_engine[1580]: I0625 14:55:17.092546 1580 update_attempter.cc:509] Updating boot flags... Jun 25 14:55:17.174328 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2061) Jun 25 14:55:19.779495 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:55:19.787705 systemd[1]: Started sshd@0-10.200.20.26:22-10.200.16.10:44176.service - OpenSSH per-connection server daemon (10.200.16.10:44176). Jun 25 14:55:20.353933 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 44176 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:20.355270 sshd[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:20.359334 systemd-logind[1576]: New session 3 of user core. Jun 25 14:55:20.368556 systemd[1]: Started session-3.scope - Session 3 of User core. 
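The repeated kubelet failures above all trace back to the same missing file: systemd restarts kubelet.service on a timer (restart counters 1 through 4 so far), and every attempt exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later, during node provisioning (for example by kubeadm). A minimal sketch of the same pre-flight check, assuming only the path quoted in the log (the check_kubelet_config helper is illustrative, not part of kubelet):

#!/usr/bin/env python3
# Reproduce the check that makes kubelet exit: is the config file readable?
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path quoted in the log

def check_kubelet_config(path: str = KUBELET_CONFIG) -> bool:
    # kubelet reports "no such file or directory" when this open would fail
    if not os.path.isfile(path):
        print(f"failed to read kubelet config file {path!r}: no such file or directory")
        return False
    with open(path) as f:
        first_line = f.readline().rstrip()
    print(f"kubelet config present, first line: {first_line or '(empty)'}")
    return True

if __name__ == "__main__":
    sys.exit(0 if check_kubelet_config() else 1)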
Jun 25 14:55:20.760725 systemd[1]: Started sshd@1-10.200.20.26:22-10.200.16.10:44180.service - OpenSSH per-connection server daemon (10.200.16.10:44180). Jun 25 14:55:21.224967 sshd[2093]: Accepted publickey for core from 10.200.16.10 port 44180 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:21.226730 sshd[2093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:21.230936 systemd-logind[1576]: New session 4 of user core. Jun 25 14:55:21.242606 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:55:21.562097 sshd[2093]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:21.564923 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:55:21.565154 systemd[1]: sshd@1-10.200.20.26:22-10.200.16.10:44180.service: Deactivated successfully. Jun 25 14:55:21.565946 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:55:21.566390 systemd-logind[1576]: Removed session 4. Jun 25 14:55:21.643833 systemd[1]: Started sshd@2-10.200.20.26:22-10.200.16.10:44196.service - OpenSSH per-connection server daemon (10.200.16.10:44196). Jun 25 14:55:22.104540 sshd[2103]: Accepted publickey for core from 10.200.16.10 port 44196 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:22.106207 sshd[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:22.110183 systemd-logind[1576]: New session 5 of user core. Jun 25 14:55:22.119580 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:55:22.431470 sshd[2103]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:22.434424 systemd[1]: sshd@2-10.200.20.26:22-10.200.16.10:44196.service: Deactivated successfully. Jun 25 14:55:22.435150 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:55:22.436641 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:55:22.437854 systemd-logind[1576]: Removed session 5. Jun 25 14:55:22.514694 systemd[1]: Started sshd@3-10.200.20.26:22-10.200.16.10:44200.service - OpenSSH per-connection server daemon (10.200.16.10:44200). Jun 25 14:55:22.972950 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 44200 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:22.974644 sshd[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:22.978473 systemd-logind[1576]: New session 6 of user core. Jun 25 14:55:22.985572 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:55:23.305446 sshd[2110]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:23.308627 systemd[1]: sshd@3-10.200.20.26:22-10.200.16.10:44200.service: Deactivated successfully. Jun 25 14:55:23.309387 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:55:23.310534 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:55:23.311371 systemd-logind[1576]: Removed session 6. Jun 25 14:55:23.387692 systemd[1]: Started sshd@4-10.200.20.26:22-10.200.16.10:44202.service - OpenSSH per-connection server daemon (10.200.16.10:44202). Jun 25 14:55:23.812212 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 44202 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:23.813905 sshd[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:23.818299 systemd-logind[1576]: New session 7 of user core. 
Jun 25 14:55:23.823555 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:55:24.363408 sudo[2121]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:55:24.364046 sudo[2121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:55:24.407470 sudo[2121]: pam_unix(sudo:session): session closed for user root Jun 25 14:55:24.475576 sshd[2117]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:24.479167 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:55:24.479792 systemd[1]: sshd@4-10.200.20.26:22-10.200.16.10:44202.service: Deactivated successfully. Jun 25 14:55:24.480590 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:55:24.481979 systemd-logind[1576]: Removed session 7. Jun 25 14:55:24.556727 systemd[1]: Started sshd@5-10.200.20.26:22-10.200.16.10:60404.service - OpenSSH per-connection server daemon (10.200.16.10:60404). Jun 25 14:55:24.933765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 14:55:24.934005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:24.941621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:25.016680 sshd[2125]: Accepted publickey for core from 10.200.16.10 port 60404 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:25.018627 sshd[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:25.032248 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 14:55:25.033473 systemd-logind[1576]: New session 8 of user core. Jun 25 14:55:25.039621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:25.090672 kubelet[2136]: E0625 14:55:25.090609 2136 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:25.092529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:25.092689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:25.280215 sudo[2145]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:55:25.280530 sudo[2145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:55:25.443693 sudo[2145]: pam_unix(sudo:session): session closed for user root Jun 25 14:55:25.449274 sudo[2144]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:55:25.449977 sudo[2144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:55:25.462618 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jun 25 14:55:25.463000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:55:25.466513 auditctl[2148]: No rules Jun 25 14:55:25.468165 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 14:55:25.468275 kernel: audit: type=1305 audit(1719327325.463:147): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:55:25.471230 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:55:25.471564 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:55:25.473880 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:55:25.463000 audit[2148]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc2d4bc60 a2=420 a3=0 items=0 ppid=1 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:25.503297 kernel: audit: type=1300 audit(1719327325.463:147): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc2d4bc60 a2=420 a3=0 items=0 ppid=1 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:25.463000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:55:25.510413 kernel: audit: type=1327 audit(1719327325.463:147): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:55:25.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.522966 augenrules[2166]: No rules Jun 25 14:55:25.524277 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:55:25.527150 kernel: audit: type=1131 audit(1719327325.471:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.544223 kernel: audit: type=1130 audit(1719327325.524:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.545013 sudo[2144]: pam_unix(sudo:session): session closed for user root Jun 25 14:55:25.544000 audit[2144]: USER_END pid=2144 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.564477 kernel: audit: type=1106 audit(1719327325.544:150): pid=2144 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:25.544000 audit[2144]: CRED_DISP pid=2144 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.582074 kernel: audit: type=1104 audit(1719327325.544:151): pid=2144 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.632519 sshd[2125]: pam_unix(sshd:session): session closed for user core Jun 25 14:55:25.633000 audit[2125]: USER_END pid=2125 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:25.635511 systemd[1]: sshd@5-10.200.20.26:22-10.200.16.10:60404.service: Deactivated successfully. Jun 25 14:55:25.636753 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:55:25.658186 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:55:25.633000 audit[2125]: CRED_DISP pid=2125 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:25.676514 kernel: audit: type=1106 audit(1719327325.633:152): pid=2125 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:25.676637 kernel: audit: type=1104 audit(1719327325.633:153): pid=2125 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:25.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.26:22-10.200.16.10:60404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.694381 kernel: audit: type=1131 audit(1719327325.635:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.26:22-10.200.16.10:60404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:25.694468 systemd-logind[1576]: Removed session 8. Jun 25 14:55:25.715678 systemd[1]: Started sshd@6-10.200.20.26:22-10.200.16.10:60420.service - OpenSSH per-connection server daemon (10.200.16.10:60420). Jun 25 14:55:25.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.26:22-10.200.16.10:60420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:26.178000 audit[2173]: USER_ACCT pid=2173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:26.178545 sshd[2173]: Accepted publickey for core from 10.200.16.10 port 60420 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:55:26.179000 audit[2173]: CRED_ACQ pid=2173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:26.179000 audit[2173]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8247370 a2=3 a3=1 items=0 ppid=1 pid=2173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:26.179000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:55:26.180226 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:55:26.184412 systemd-logind[1576]: New session 9 of user core. Jun 25 14:55:26.193573 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 14:55:26.197000 audit[2173]: USER_START pid=2173 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:26.199000 audit[2176]: CRED_ACQ pid=2176 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:55:26.443000 audit[2177]: USER_ACCT pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:26.444484 sudo[2177]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:55:26.444000 audit[2177]: CRED_REFR pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:26.445113 sudo[2177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:55:26.446000 audit[2177]: USER_START pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:55:27.045690 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:55:28.318547 dockerd[2186]: time="2024-06-25T14:55:28.318486697Z" level=info msg="Starting up" Jun 25 14:55:28.350903 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport937419213-merged.mount: Deactivated successfully. Jun 25 14:55:28.430961 systemd[1]: var-lib-docker-metacopy\x2dcheck3997367550-merged.mount: Deactivated successfully. 
Jun 25 14:55:28.458077 dockerd[2186]: time="2024-06-25T14:55:28.458039206Z" level=info msg="Loading containers: start." Jun 25 14:55:28.518000 audit[2215]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.518000 audit[2215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd352f5d0 a2=0 a3=1 items=0 ppid=2186 pid=2215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:55:28.520000 audit[2217]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.520000 audit[2217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff0a6dfc0 a2=0 a3=1 items=0 ppid=2186 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.520000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:55:28.522000 audit[2219]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.522000 audit[2219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffda5bc60 a2=0 a3=1 items=0 ppid=2186 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.522000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:55:28.524000 audit[2221]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.524000 audit[2221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe429d070 a2=0 a3=1 items=0 ppid=2186 pid=2221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.524000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:55:28.526000 audit[2223]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.526000 audit[2223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff5cf15d0 a2=0 a3=1 items=0 ppid=2186 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.526000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:55:28.528000 audit[2225]: NETFILTER_CFG table=filter:10 family=2 entries=1 
op=nft_register_rule pid=2225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.528000 audit[2225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc1d84590 a2=0 a3=1 items=0 ppid=2186 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.528000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:55:28.546000 audit[2227]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2227 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.546000 audit[2227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffffcf4560 a2=0 a3=1 items=0 ppid=2186 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.546000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:55:28.548000 audit[2229]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.548000 audit[2229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe31d9950 a2=0 a3=1 items=0 ppid=2186 pid=2229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.548000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:55:28.550000 audit[2231]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.550000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc82d99e0 a2=0 a3=1 items=0 ppid=2186 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.550000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:55:28.579000 audit[2235]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.579000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffffb08160 a2=0 a3=1 items=0 ppid=2186 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.579000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:55:28.580000 audit[2236]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2236 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.580000 audit[2236]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcf219080 a2=0 a3=1 items=0 ppid=2186 
pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.580000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:55:28.688315 kernel: Initializing XFRM netlink socket Jun 25 14:55:28.820000 audit[2244]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.820000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffed2d8e80 a2=0 a3=1 items=0 ppid=2186 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.820000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:55:28.828000 audit[2247]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.828000 audit[2247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe88c2d60 a2=0 a3=1 items=0 ppid=2186 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.828000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:55:28.832000 audit[2251]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.832000 audit[2251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffed2a1370 a2=0 a3=1 items=0 ppid=2186 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.832000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 25 14:55:28.835000 audit[2253]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.835000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc493e160 a2=0 a3=1 items=0 ppid=2186 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.835000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:55:28.837000 audit[2255]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.837000 audit[2255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=fffff7f24520 a2=0 a3=1 items=0 ppid=2186 pid=2255 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.837000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:55:28.838000 audit[2257]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.838000 audit[2257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffd599e680 a2=0 a3=1 items=0 ppid=2186 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.838000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:55:28.840000 audit[2259]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.840000 audit[2259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff00994e0 a2=0 a3=1 items=0 ppid=2186 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.840000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:55:28.842000 audit[2261]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.842000 audit[2261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffd0ed9e80 a2=0 a3=1 items=0 ppid=2186 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:55:28.844000 audit[2263]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.844000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc8c932b0 a2=0 a3=1 items=0 ppid=2186 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.844000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:55:28.846000 audit[2265]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.846000 audit[2265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 
a0=3 a1=fffff2892af0 a2=0 a3=1 items=0 ppid=2186 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.846000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:55:28.848000 audit[2267]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.848000 audit[2267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff5738300 a2=0 a3=1 items=0 ppid=2186 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.848000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:55:28.850190 systemd-networkd[1299]: docker0: Link UP Jun 25 14:55:28.866000 audit[2271]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.866000 audit[2271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe98af140 a2=0 a3=1 items=0 ppid=2186 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.866000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:55:28.867000 audit[2272]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:55:28.867000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe66b42b0 a2=0 a3=1 items=0 ppid=2186 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:55:28.867000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:55:28.869232 dockerd[2186]: time="2024-06-25T14:55:28.869205991Z" level=info msg="Loading containers: done." Jun 25 14:55:29.602929 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck482232083-merged.mount: Deactivated successfully. 
Jun 25 14:55:31.138458 dockerd[2186]: time="2024-06-25T14:55:31.138383827Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:55:31.138817 dockerd[2186]: time="2024-06-25T14:55:31.138638102Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:55:31.138817 dockerd[2186]: time="2024-06-25T14:55:31.138754380Z" level=info msg="Daemon has completed initialization" Jun 25 14:55:31.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:31.364755 dockerd[2186]: time="2024-06-25T14:55:31.363479133Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:55:31.363620 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 14:55:31.367793 kernel: kauditd_printk_skb: 83 callbacks suppressed Jun 25 14:55:31.367852 kernel: audit: type=1130 audit(1719327331.362:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:33.802008 containerd[1604]: time="2024-06-25T14:55:33.801962611Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 14:55:35.210119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 14:55:35.210326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:35.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:35.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:35.241887 kernel: audit: type=1130 audit(1719327335.209:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:35.241999 kernel: audit: type=1131 audit(1719327335.209:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:35.243675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:35.334344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:35.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:35.352368 kernel: audit: type=1130 audit(1719327335.333:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:35.394089 kubelet[2322]: E0625 14:55:35.394039 2322 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:35.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:35.396236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:35.396418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:35.414382 kernel: audit: type=1131 audit(1719327335.395:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:39.753174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170498258.mount: Deactivated successfully. Jun 25 14:55:45.460132 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 25 14:55:45.460331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:45.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:45.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:45.493334 kernel: audit: type=1130 audit(1719327345.459:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:45.493397 kernel: audit: type=1131 audit(1719327345.459:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:45.495640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:45.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:45.587945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:45.607318 kernel: audit: type=1130 audit(1719327345.586:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:55:45.648362 kubelet[2354]: E0625 14:55:45.648263 2354 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:45.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:45.650464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:45.650624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:45.668313 kernel: audit: type=1131 audit(1719327345.649:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:48.023976 containerd[1604]: time="2024-06-25T14:55:48.023754552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:48.027189 containerd[1604]: time="2024-06-25T14:55:48.027152395Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jun 25 14:55:48.029794 containerd[1604]: time="2024-06-25T14:55:48.029770512Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:48.034024 containerd[1604]: time="2024-06-25T14:55:48.033999204Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:48.037408 containerd[1604]: time="2024-06-25T14:55:48.037382407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:48.038463 containerd[1604]: time="2024-06-25T14:55:48.038434989Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 14.235975022s" Jun 25 14:55:48.038572 containerd[1604]: time="2024-06-25T14:55:48.038554437Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 14:55:48.059032 containerd[1604]: time="2024-06-25T14:55:48.058993898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 14:55:49.923917 containerd[1604]: time="2024-06-25T14:55:49.923863262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:49.926699 containerd[1604]: time="2024-06-25T14:55:49.926654905Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jun 25 
14:55:49.931018 containerd[1604]: time="2024-06-25T14:55:49.930994637Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:49.935486 containerd[1604]: time="2024-06-25T14:55:49.935439856Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:49.940824 containerd[1604]: time="2024-06-25T14:55:49.940794928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:49.941883 containerd[1604]: time="2024-06-25T14:55:49.941839789Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.882647838s" Jun 25 14:55:49.941948 containerd[1604]: time="2024-06-25T14:55:49.941879751Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 14:55:49.961523 containerd[1604]: time="2024-06-25T14:55:49.961468651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 14:55:50.993162 containerd[1604]: time="2024-06-25T14:55:50.993109807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:50.995131 containerd[1604]: time="2024-06-25T14:55:50.995099959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jun 25 14:55:51.000436 containerd[1604]: time="2024-06-25T14:55:51.000399340Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:51.006251 containerd[1604]: time="2024-06-25T14:55:51.006209701Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:51.012450 containerd[1604]: time="2024-06-25T14:55:51.012420804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:51.013472 containerd[1604]: time="2024-06-25T14:55:51.013428659Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.051908605s" Jun 25 14:55:51.013536 containerd[1604]: time="2024-06-25T14:55:51.013473262Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 
14:55:51.033534 containerd[1604]: time="2024-06-25T14:55:51.033487687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 14:55:52.155130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67144888.mount: Deactivated successfully. Jun 25 14:55:52.784212 containerd[1604]: time="2024-06-25T14:55:52.784156782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:52.787140 containerd[1604]: time="2024-06-25T14:55:52.787102941Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jun 25 14:55:52.790926 containerd[1604]: time="2024-06-25T14:55:52.790896905Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:52.795036 containerd[1604]: time="2024-06-25T14:55:52.795001366Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:52.798852 containerd[1604]: time="2024-06-25T14:55:52.798823491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:52.799403 containerd[1604]: time="2024-06-25T14:55:52.799369601Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.765680302s" Jun 25 14:55:52.799463 containerd[1604]: time="2024-06-25T14:55:52.799405843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 14:55:52.817694 containerd[1604]: time="2024-06-25T14:55:52.817641064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:55:53.415967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397304205.mount: Deactivated successfully. 
Jun 25 14:55:53.456628 containerd[1604]: time="2024-06-25T14:55:53.456582868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:53.459828 containerd[1604]: time="2024-06-25T14:55:53.459796757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 14:55:53.464865 containerd[1604]: time="2024-06-25T14:55:53.464839781Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:53.470661 containerd[1604]: time="2024-06-25T14:55:53.470634765Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:53.476943 containerd[1604]: time="2024-06-25T14:55:53.476909894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:53.477515 containerd[1604]: time="2024-06-25T14:55:53.477475324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 659.781337ms" Jun 25 14:55:53.477515 containerd[1604]: time="2024-06-25T14:55:53.477511485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:55:53.500072 containerd[1604]: time="2024-06-25T14:55:53.500031306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:55:54.162410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1327699.mount: Deactivated successfully. Jun 25 14:55:55.710188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 25 14:55:55.710397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:55:55.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:55.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:55.742890 kernel: audit: type=1130 audit(1719327355.709:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:55.743009 kernel: audit: type=1131 audit(1719327355.709:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:55.743626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:55:57.753030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 25 14:55:57.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:57.773489 kernel: audit: type=1130 audit(1719327357.751:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:55:57.813643 kubelet[2486]: E0625 14:55:57.813583 2486 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:55:57.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:57.815776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:55:57.815936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:55:57.834349 kernel: audit: type=1131 audit(1719327357.814:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:55:58.998986 containerd[1604]: time="2024-06-25T14:55:58.998930392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:59.001481 containerd[1604]: time="2024-06-25T14:55:59.001430628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 14:55:59.005388 containerd[1604]: time="2024-06-25T14:55:59.005356044Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:59.011606 containerd[1604]: time="2024-06-25T14:55:59.011567244Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:59.017686 containerd[1604]: time="2024-06-25T14:55:59.017645998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:55:59.019127 containerd[1604]: time="2024-06-25T14:55:59.019087663Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.519015794s" Jun 25 14:55:59.019127 containerd[1604]: time="2024-06-25T14:55:59.019124624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:55:59.039341 containerd[1604]: time="2024-06-25T14:55:59.039301613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 14:55:59.718972 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1309037616.mount: Deactivated successfully. Jun 25 14:56:00.125579 containerd[1604]: time="2024-06-25T14:56:00.125264297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:00.128345 containerd[1604]: time="2024-06-25T14:56:00.128314471Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jun 25 14:56:00.133481 containerd[1604]: time="2024-06-25T14:56:00.133456496Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:00.194987 containerd[1604]: time="2024-06-25T14:56:00.194938237Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:00.199456 containerd[1604]: time="2024-06-25T14:56:00.199426434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:00.201174 containerd[1604]: time="2024-06-25T14:56:00.201144390Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.161642247s" Jun 25 14:56:00.201324 containerd[1604]: time="2024-06-25T14:56:00.201303997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 14:56:05.868841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:56:05.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:05.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:05.887830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:56:05.901674 kernel: audit: type=1130 audit(1719327365.867:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:05.901773 kernel: audit: type=1131 audit(1719327365.867:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:05.917657 systemd[1]: Reloading. Jun 25 14:56:06.116153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jun 25 14:56:06.210397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 14:56:06.210626 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 14:56:06.211092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:56:06.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:56:06.229308 kernel: audit: type=1130 audit(1719327366.209:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:56:06.232972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:56:09.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:09.051634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:56:09.069322 kernel: audit: type=1130 audit(1719327369.050:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:09.106874 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:56:09.107257 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:56:09.107335 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:56:09.107492 kubelet[2670]: I0625 14:56:09.107458 2670 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:56:09.874611 kubelet[2670]: I0625 14:56:09.874580 2670 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:56:09.874611 kubelet[2670]: I0625 14:56:09.874607 2670 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:56:09.874831 kubelet[2670]: I0625 14:56:09.874814 2670 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:56:09.894043 kubelet[2670]: I0625 14:56:09.894013 2670 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:56:09.894267 kubelet[2670]: E0625 14:56:09.894020 2670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.902399 kubelet[2670]: W0625 14:56:09.902377 2670 machine.go:65] Cannot read vendor id correctly, set empty. 
Jun 25 14:56:09.903078 kubelet[2670]: I0625 14:56:09.903055 2670 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 14:56:09.903530 kubelet[2670]: I0625 14:56:09.903518 2670 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:56:09.903772 kubelet[2670]: I0625 14:56:09.903758 2670 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:56:09.903916 kubelet[2670]: I0625 14:56:09.903904 2670 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:56:09.903978 kubelet[2670]: I0625 14:56:09.903969 2670 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:56:09.904131 kubelet[2670]: I0625 14:56:09.904121 2670 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:56:09.905220 kubelet[2670]: I0625 14:56:09.905208 2670 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:56:09.905750 kubelet[2670]: I0625 14:56:09.905737 2670 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:56:09.905864 kubelet[2670]: I0625 14:56:09.905854 2670 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:56:09.905944 kubelet[2670]: I0625 14:56:09.905936 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:56:09.906887 kubelet[2670]: W0625 14:56:09.905687 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.907001 kubelet[2670]: E0625 14:56:09.906990 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.907861 kubelet[2670]: W0625 14:56:09.907833 2670 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.907970 kubelet[2670]: E0625 14:56:09.907960 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.908113 kubelet[2670]: I0625 14:56:09.908101 2670 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:56:09.911902 kubelet[2670]: W0625 14:56:09.911887 2670 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:56:09.912771 kubelet[2670]: I0625 14:56:09.912758 2670 server.go:1232] "Started kubelet" Jun 25 14:56:09.914069 kubelet[2670]: I0625 14:56:09.914050 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:56:09.915529 kubelet[2670]: E0625 14:56:09.915508 2670 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:56:09.915589 kubelet[2670]: E0625 14:56:09.915536 2670 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:56:09.916502 kubelet[2670]: E0625 14:56:09.916402 2670 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-2c7c8223bb.17dc47223041444f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-2c7c8223bb", UID:"ci-3815.2.4-a-2c7c8223bb", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-2c7c8223bb"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 56, 9, 912730703, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 56, 9, 912730703, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-2c7c8223bb"}': 'Post "https://10.200.20.26:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.26:6443: connect: connection refused'(may retry after sleeping) Jun 25 14:56:09.917524 kubelet[2670]: I0625 14:56:09.917501 2670 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:56:09.918204 kubelet[2670]: I0625 14:56:09.918162 2670 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:56:09.917000 audit[2680]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
14:56:09.930754 kubelet[2670]: I0625 14:56:09.919522 2670 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:56:09.930754 kubelet[2670]: I0625 14:56:09.919708 2670 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:56:09.930754 kubelet[2670]: I0625 14:56:09.924657 2670 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:56:09.930754 kubelet[2670]: I0625 14:56:09.924788 2670 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:56:09.930754 kubelet[2670]: I0625 14:56:09.924877 2670 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:56:09.930754 kubelet[2670]: W0625 14:56:09.925238 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.930754 kubelet[2670]: E0625 14:56:09.925310 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:09.930754 kubelet[2670]: E0625 14:56:09.926116 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="200ms" Jun 25 14:56:09.917000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9b01770 a2=0 a3=1 items=0 ppid=2670 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:09.954620 kernel: audit: type=1325 audit(1719327369.917:205): table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:09.954732 kernel: audit: type=1300 audit(1719327369.917:205): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9b01770 a2=0 a3=1 items=0 ppid=2670 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:09.917000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:56:09.967963 kernel: audit: type=1327 audit(1719327369.917:205): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:56:09.979000 audit[2683]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:09.979000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce580330 a2=0 a3=1 items=0 ppid=2670 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:10.016630 kernel: audit: type=1325 
audit(1719327369.979:206): table=filter:30 family=2 entries=1 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:10.016755 kernel: audit: type=1300 audit(1719327369.979:206): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce580330 a2=0 a3=1 items=0 ppid=2670 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:09.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:56:10.030351 kernel: audit: type=1327 audit(1719327369.979:206): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:56:10.016000 audit[2686]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:10.016000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc26f830 a2=0 a3=1 items=0 ppid=2670 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:10.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:56:10.020000 audit[2688]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:10.020000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffa6d9a90 a2=0 a3=1 items=0 ppid=2670 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:10.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:56:10.054787 kubelet[2670]: I0625 14:56:10.054765 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:10.055351 kubelet[2670]: E0625 14:56:10.055326 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:10.056020 kubelet[2670]: I0625 14:56:10.055993 2670 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:56:10.056020 kubelet[2670]: I0625 14:56:10.056018 2670 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:56:10.056153 kubelet[2670]: I0625 14:56:10.056036 2670 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:56:10.127632 kubelet[2670]: E0625 14:56:10.127019 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="400ms" Jun 25 14:56:10.257727 kubelet[2670]: I0625 14:56:10.257697 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:10.258025 kubelet[2670]: E0625 
14:56:10.258010 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.785996 kernel: kauditd_printk_skb: 6 callbacks suppressed Jun 25 14:56:12.786090 kernel: audit: type=1325 audit(1719327372.493:209): table=filter:33 family=2 entries=1 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.786112 kernel: audit: type=1300 audit(1719327372.493:209): arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe35b16b0 a2=0 a3=1 items=0 ppid=2670 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.786130 kernel: audit: type=1327 audit(1719327372.493:209): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:56:12.786149 kernel: audit: type=1325 audit(1719327372.493:210): table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.786165 kernel: audit: type=1300 audit(1719327372.493:210): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc495e40 a2=0 a3=1 items=0 ppid=2670 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.786183 kernel: audit: type=1327 audit(1719327372.493:210): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:56:12.786201 kernel: audit: type=1325 audit(1719327372.493:211): table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.786222 kernel: audit: type=1300 audit(1719327372.493:211): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfc6c700 a2=0 a3=1 items=0 ppid=2670 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.786240 kernel: audit: type=1327 audit(1719327372.493:211): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:56:12.786257 kernel: audit: type=1325 audit(1719327372.497:212): table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.493000 audit[2692]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.493000 audit[2692]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe35b16b0 a2=0 a3=1 items=0 ppid=2670 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.493000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:56:12.493000 audit[2694]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.493000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc495e40 a2=0 a3=1 items=0 ppid=2670 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.493000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:56:12.493000 audit[2695]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.493000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfc6c700 a2=0 a3=1 items=0 ppid=2670 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:56:12.497000 audit[2696]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.497000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfe9a2e0 a2=0 a3=1 items=0 ppid=2670 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:56:12.497000 audit[2697]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.497000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd92c9c70 a2=0 a3=1 items=0 ppid=2670 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:56:12.497000 audit[2698]: NETFILTER_CFG table=nat:38 family=10 entries=2 op=nft_register_chain pid=2698 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.497000 audit[2698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd4325430 a2=0 a3=1 items=0 ppid=2670 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.497000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:56:12.497000 audit[2699]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:12.497000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc58c3150 a2=0 a3=1 items=0 ppid=2670 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:56:12.497000 audit[2700]: NETFILTER_CFG table=filter:40 family=10 entries=2 op=nft_register_chain pid=2700 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:12.497000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd8a92d00 a2=0 a3=1 items=0 ppid=2670 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:12.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:56:12.787099 kubelet[2670]: E0625 14:56:10.528741 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="800ms" Jun 25 14:56:12.787099 kubelet[2670]: I0625 14:56:10.660462 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.787099 kubelet[2670]: E0625 14:56:10.660731 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.787099 kubelet[2670]: W0625 14:56:10.865886 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787099 kubelet[2670]: E0625 14:56:10.865945 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787099 kubelet[2670]: W0625 14:56:11.262145 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787099 kubelet[2670]: E0625 14:56:11.262181 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787484 
kubelet[2670]: E0625 14:56:11.329613 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="1.6s" Jun 25 14:56:12.787484 kubelet[2670]: W0625 14:56:11.344056 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787484 kubelet[2670]: E0625 14:56:11.344103 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787484 kubelet[2670]: I0625 14:56:11.462911 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.787484 kubelet[2670]: E0625 14:56:11.463214 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.787484 kubelet[2670]: E0625 14:56:11.994496 2670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787484 kubelet[2670]: W0625 14:56:12.482457 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787633 kubelet[2670]: E0625 14:56:12.482488 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787633 kubelet[2670]: I0625 14:56:12.494868 2670 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:56:12.787633 kubelet[2670]: I0625 14:56:12.496421 2670 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:56:12.787633 kubelet[2670]: I0625 14:56:12.496440 2670 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:56:12.787633 kubelet[2670]: I0625 14:56:12.496463 2670 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:56:12.787633 kubelet[2670]: E0625 14:56:12.496521 2670 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:56:12.787633 kubelet[2670]: W0625 14:56:12.498076 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787633 kubelet[2670]: E0625 14:56:12.498111 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:12.787633 kubelet[2670]: E0625 14:56:12.596856 2670 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:56:12.789169 kubelet[2670]: I0625 14:56:12.789148 2670 policy_none.go:49] "None policy: Start" Jun 25 14:56:12.790064 kubelet[2670]: I0625 14:56:12.790038 2670 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:56:12.790114 kubelet[2670]: I0625 14:56:12.790078 2670 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:56:12.797681 kubelet[2670]: I0625 14:56:12.797650 2670 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:56:12.797909 kubelet[2670]: I0625 14:56:12.797885 2670 topology_manager.go:215] "Topology Admit Handler" podUID="87a9f4b3e81f279ccdd81b531fad9666" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.800000 kubelet[2670]: I0625 14:56:12.799973 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:56:12.800753 kubelet[2670]: E0625 14:56:12.800732 2670 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.4-a-2c7c8223bb\" not found" Jun 25 14:56:12.800919 kubelet[2670]: I0625 14:56:12.800898 2670 topology_manager.go:215] "Topology Admit Handler" podUID="03ab144c7a2b275235aadba34ec9763f" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.804848 kubelet[2670]: I0625 14:56:12.804824 2670 topology_manager.go:215] "Topology Admit Handler" podUID="6cbef7239464e167901683920f3b8409" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837069 kubelet[2670]: I0625 14:56:12.837031 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837279 kubelet[2670]: I0625 14:56:12.837268 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837412 kubelet[2670]: I0625 14:56:12.837401 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837521 kubelet[2670]: I0625 14:56:12.837510 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837624 kubelet[2670]: I0625 14:56:12.837614 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837730 kubelet[2670]: I0625 14:56:12.837720 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837827 kubelet[2670]: I0625 14:56:12.837817 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87a9f4b3e81f279ccdd81b531fad9666-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-2c7c8223bb\" (UID: \"87a9f4b3e81f279ccdd81b531fad9666\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.837926 kubelet[2670]: I0625 14:56:12.837915 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-k8s-certs\") pod \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.838028 kubelet[2670]: I0625 14:56:12.838018 2670 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:12.930106 kubelet[2670]: E0625 14:56:12.930084 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: 
connection refused" interval="3.2s" Jun 25 14:56:13.065623 kubelet[2670]: I0625 14:56:13.064994 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:13.065623 kubelet[2670]: E0625 14:56:13.065313 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:13.109385 containerd[1604]: time="2024-06-25T14:56:13.109337946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-2c7c8223bb,Uid:87a9f4b3e81f279ccdd81b531fad9666,Namespace:kube-system,Attempt:0,}" Jun 25 14:56:13.113386 containerd[1604]: time="2024-06-25T14:56:13.113331755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-2c7c8223bb,Uid:03ab144c7a2b275235aadba34ec9763f,Namespace:kube-system,Attempt:0,}" Jun 25 14:56:13.113989 containerd[1604]: time="2024-06-25T14:56:13.113781770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-2c7c8223bb,Uid:6cbef7239464e167901683920f3b8409,Namespace:kube-system,Attempt:0,}" Jun 25 14:56:13.496537 kubelet[2670]: W0625 14:56:13.365562 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:13.496537 kubelet[2670]: E0625 14:56:13.365605 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:13.496697 kubelet[2670]: E0625 14:56:13.443198 2670 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3815.2.4-a-2c7c8223bb.17dc47223041444f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3815.2.4-a-2c7c8223bb", UID:"ci-3815.2.4-a-2c7c8223bb", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-2c7c8223bb"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 56, 9, 912730703, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 56, 9, 912730703, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-2c7c8223bb"}': 'Post "https://10.200.20.26:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.26:6443: connect: connection refused'(may retry after sleeping) Jun 25 14:56:13.900370 kubelet[2670]: W0625 14:56:13.900216 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list 
*v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:13.900370 kubelet[2670]: E0625 14:56:13.900267 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:14.437403 kubelet[2670]: W0625 14:56:14.437367 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:14.437403 kubelet[2670]: E0625 14:56:14.437407 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.4-a-2c7c8223bb&limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:15.038365 kubelet[2670]: W0625 14:56:15.038281 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:15.038365 kubelet[2670]: E0625 14:56:15.038345 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:16.046037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631564036.mount: Deactivated successfully. 
Jun 25 14:56:16.130802 kubelet[2670]: E0625 14:56:16.130768 2670 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": dial tcp 10.200.20.26:6443: connect: connection refused" interval="6.4s" Jun 25 14:56:16.267146 kubelet[2670]: I0625 14:56:16.267063 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:16.267457 kubelet[2670]: E0625 14:56:16.267434 2670 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.26:6443/api/v1/nodes\": dial tcp 10.200.20.26:6443: connect: connection refused" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:16.336391 containerd[1604]: time="2024-06-25T14:56:16.336260544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.362732 kubelet[2670]: E0625 14:56:16.362698 2670 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:16.687481 containerd[1604]: time="2024-06-25T14:56:16.687362188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 14:56:16.690588 containerd[1604]: time="2024-06-25T14:56:16.690547045Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.744761 containerd[1604]: time="2024-06-25T14:56:16.744722527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:56:16.791156 containerd[1604]: time="2024-06-25T14:56:16.791110174Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.839115 containerd[1604]: time="2024-06-25T14:56:16.839070108Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.886345 containerd[1604]: time="2024-06-25T14:56:16.886307540Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.949109 containerd[1604]: time="2024-06-25T14:56:16.948989320Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:16.952312 containerd[1604]: time="2024-06-25T14:56:16.952263379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:56:17.000248 containerd[1604]: time="2024-06-25T14:56:17.000148271Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.001509 containerd[1604]: time="2024-06-25T14:56:17.001462670Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 3.8920024s" Jun 25 14:56:17.045219 containerd[1604]: time="2024-06-25T14:56:17.045087645Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.092443 containerd[1604]: time="2024-06-25T14:56:17.092404529Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.139915 kubelet[2670]: W0625 14:56:17.139876 2670 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:17.139915 kubelet[2670]: E0625 14:56:17.139915 2670 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.26:6443: connect: connection refused Jun 25 14:56:17.258682 containerd[1604]: time="2024-06-25T14:56:17.258632181Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.265966 containerd[1604]: time="2024-06-25T14:56:17.265911717Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.266843 containerd[1604]: time="2024-06-25T14:56:17.266805184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 4.153399506s" Jun 25 14:56:17.276722 containerd[1604]: time="2024-06-25T14:56:17.276675157Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:56:17.277562 containerd[1604]: time="2024-06-25T14:56:17.277529462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 4.163681971s" Jun 25 14:56:17.458306 containerd[1604]: time="2024-06-25T14:56:17.458044818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:17.458306 containerd[1604]: time="2024-06-25T14:56:17.458111940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.458306 containerd[1604]: time="2024-06-25T14:56:17.458131861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:17.458306 containerd[1604]: time="2024-06-25T14:56:17.458146581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.459053 containerd[1604]: time="2024-06-25T14:56:17.458570154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:17.459053 containerd[1604]: time="2024-06-25T14:56:17.458607755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.459053 containerd[1604]: time="2024-06-25T14:56:17.458622636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:17.459053 containerd[1604]: time="2024-06-25T14:56:17.458632956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.460326 containerd[1604]: time="2024-06-25T14:56:17.459854112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:17.460326 containerd[1604]: time="2024-06-25T14:56:17.459899113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.460326 containerd[1604]: time="2024-06-25T14:56:17.459918554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:17.460326 containerd[1604]: time="2024-06-25T14:56:17.459932954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:17.527251 containerd[1604]: time="2024-06-25T14:56:17.525239692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.4-a-2c7c8223bb,Uid:03ab144c7a2b275235aadba34ec9763f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e8cfa2f34aa89990a48ecba39d403493bdb94dd3b04af55c27355aafe79d2d4\"" Jun 25 14:56:17.530083 containerd[1604]: time="2024-06-25T14:56:17.530018034Z" level=info msg="CreateContainer within sandbox \"0e8cfa2f34aa89990a48ecba39d403493bdb94dd3b04af55c27355aafe79d2d4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:56:17.545855 containerd[1604]: time="2024-06-25T14:56:17.545815663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.4-a-2c7c8223bb,Uid:6cbef7239464e167901683920f3b8409,Namespace:kube-system,Attempt:0,} returns sandbox id \"b48b977844deee89cd16a6856e40853d39399df977acb6f6d9cbdc862620e171\"" Jun 25 14:56:17.548695 containerd[1604]: time="2024-06-25T14:56:17.548656067Z" level=info msg="CreateContainer within sandbox \"b48b977844deee89cd16a6856e40853d39399df977acb6f6d9cbdc862620e171\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:56:17.552646 containerd[1604]: time="2024-06-25T14:56:17.552593424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.4-a-2c7c8223bb,Uid:87a9f4b3e81f279ccdd81b531fad9666,Namespace:kube-system,Attempt:0,} returns sandbox id \"422822c6866a635bb2a24a2d0dbd727788cdf00f6b5e4a3a2760cbb802457951\"" Jun 25 14:56:17.555328 containerd[1604]: time="2024-06-25T14:56:17.555269783Z" level=info msg="CreateContainer within sandbox \"422822c6866a635bb2a24a2d0dbd727788cdf00f6b5e4a3a2760cbb802457951\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:56:17.586659 containerd[1604]: time="2024-06-25T14:56:17.586603753Z" level=info msg="CreateContainer within sandbox \"0e8cfa2f34aa89990a48ecba39d403493bdb94dd3b04af55c27355aafe79d2d4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b80b5fa0d96327768f286a5fcf177b63eb693bafafb8956e68193efafb0271d8\"" Jun 25 14:56:17.587383 containerd[1604]: time="2024-06-25T14:56:17.587355615Z" level=info msg="StartContainer for \"b80b5fa0d96327768f286a5fcf177b63eb693bafafb8956e68193efafb0271d8\"" Jun 25 14:56:17.619236 containerd[1604]: time="2024-06-25T14:56:17.616576403Z" level=info msg="CreateContainer within sandbox \"b48b977844deee89cd16a6856e40853d39399df977acb6f6d9cbdc862620e171\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42\"" Jun 25 14:56:17.619236 containerd[1604]: time="2024-06-25T14:56:17.617152660Z" level=info msg="StartContainer for \"13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42\"" Jun 25 14:56:17.622523 containerd[1604]: time="2024-06-25T14:56:17.622471017Z" level=info msg="CreateContainer within sandbox \"422822c6866a635bb2a24a2d0dbd727788cdf00f6b5e4a3a2760cbb802457951\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292\"" Jun 25 14:56:17.623079 containerd[1604]: time="2024-06-25T14:56:17.623017554Z" level=info msg="StartContainer for \"4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292\"" Jun 25 14:56:17.670853 containerd[1604]: time="2024-06-25T14:56:17.670794451Z" level=info 
msg="StartContainer for \"b80b5fa0d96327768f286a5fcf177b63eb693bafafb8956e68193efafb0271d8\" returns successfully" Jun 25 14:56:17.707223 containerd[1604]: time="2024-06-25T14:56:17.707167011Z" level=info msg="StartContainer for \"13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42\" returns successfully" Jun 25 14:56:17.727005 containerd[1604]: time="2024-06-25T14:56:17.726954478Z" level=info msg="StartContainer for \"4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292\" returns successfully" Jun 25 14:56:20.456714 kubelet[2670]: E0625 14:56:20.456687 2670 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.4-a-2c7c8223bb" not found Jun 25 14:56:20.827822 kubelet[2670]: E0625 14:56:20.827773 2670 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.4-a-2c7c8223bb" not found Jun 25 14:56:20.914257 kubelet[2670]: I0625 14:56:20.914192 2670 apiserver.go:52] "Watching apiserver" Jun 25 14:56:20.925346 kubelet[2670]: I0625 14:56:20.925309 2670 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:56:21.280179 kubelet[2670]: E0625 14:56:21.280149 2670 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.4-a-2c7c8223bb" not found Jun 25 14:56:22.182945 kubelet[2670]: E0625 14:56:22.182911 2670 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3815.2.4-a-2c7c8223bb" not found Jun 25 14:56:22.535970 kubelet[2670]: E0625 14:56:22.535939 2670 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3815.2.4-a-2c7c8223bb\" not found" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:22.670165 kubelet[2670]: I0625 14:56:22.670140 2670 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:22.674922 kubelet[2670]: I0625 14:56:22.674886 2670 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:23.965605 systemd[1]: Reloading. Jun 25 14:56:24.155129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:56:24.241563 kubelet[2670]: I0625 14:56:24.241462 2670 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:56:24.241646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:56:24.260758 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:56:24.261110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:56:24.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:24.265266 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:56:24.265371 kernel: audit: type=1131 audit(1719327384.259:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jun 25 14:56:24.286696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:56:24.380766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:56:24.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:24.407370 kernel: audit: type=1130 audit(1719327384.380:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:24.459605 kubelet[3039]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:56:24.459993 kubelet[3039]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:56:24.460041 kubelet[3039]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:56:24.460184 kubelet[3039]: I0625 14:56:24.460144 3039 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:56:24.464849 kubelet[3039]: I0625 14:56:24.464810 3039 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:56:24.464849 kubelet[3039]: I0625 14:56:24.464841 3039 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:56:24.465082 kubelet[3039]: I0625 14:56:24.465059 3039 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:56:24.466756 kubelet[3039]: I0625 14:56:24.466734 3039 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:56:24.467949 kubelet[3039]: I0625 14:56:24.467927 3039 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:56:24.473829 kubelet[3039]: W0625 14:56:24.473805 3039 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:56:24.474786 kubelet[3039]: I0625 14:56:24.474767 3039 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:56:24.475437 kubelet[3039]: I0625 14:56:24.475422 3039 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:56:24.475717 kubelet[3039]: I0625 14:56:24.475692 3039 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:56:24.475864 kubelet[3039]: I0625 14:56:24.475851 3039 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:56:24.475930 kubelet[3039]: I0625 14:56:24.475920 3039 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:56:24.476028 kubelet[3039]: I0625 14:56:24.476018 3039 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:56:24.476199 kubelet[3039]: I0625 14:56:24.476186 3039 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:56:24.476279 kubelet[3039]: I0625 14:56:24.476269 3039 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:56:24.476425 kubelet[3039]: I0625 14:56:24.476413 3039 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:56:24.476499 kubelet[3039]: I0625 14:56:24.476490 3039 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:56:24.479186 kubelet[3039]: I0625 14:56:24.479167 3039 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:56:24.480631 kubelet[3039]: I0625 14:56:24.480611 3039 server.go:1232] "Started kubelet" Jun 25 14:56:24.486416 kubelet[3039]: I0625 14:56:24.486389 3039 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:56:24.493789 kubelet[3039]: I0625 14:56:24.492822 3039 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:56:24.494994 kubelet[3039]: I0625 14:56:24.494953 3039 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:56:24.496310 kubelet[3039]: I0625 14:56:24.496269 3039 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:56:24.496584 kubelet[3039]: I0625 14:56:24.496570 3039 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:56:24.498256 kubelet[3039]: I0625 14:56:24.498234 3039 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:56:24.500408 kubelet[3039]: E0625 14:56:24.500387 3039 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:56:24.500533 kubelet[3039]: E0625 14:56:24.500521 3039 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:56:24.501492 kubelet[3039]: I0625 14:56:24.501472 3039 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:56:24.501723 kubelet[3039]: I0625 14:56:24.501706 3039 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:56:24.504671 kubelet[3039]: I0625 14:56:24.504652 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:56:24.505770 kubelet[3039]: I0625 14:56:24.505753 3039 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 14:56:24.505892 kubelet[3039]: I0625 14:56:24.505881 3039 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:56:24.505964 kubelet[3039]: I0625 14:56:24.505954 3039 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:56:24.506080 kubelet[3039]: E0625 14:56:24.506070 3039 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:56:24.604734 kubelet[3039]: I0625 14:56:24.604700 3039 kubelet_node_status.go:70] "Attempting to register node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.608627 kubelet[3039]: E0625 14:56:24.606875 3039 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:56:24.625217 kubelet[3039]: I0625 14:56:24.625192 3039 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:56:24.625475 kubelet[3039]: I0625 14:56:24.625462 3039 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:56:24.625563 kubelet[3039]: I0625 14:56:24.625553 3039 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:56:24.625819 kubelet[3039]: I0625 14:56:24.625808 3039 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:56:24.625967 kubelet[3039]: I0625 14:56:24.625955 3039 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:56:24.626046 kubelet[3039]: I0625 14:56:24.626036 3039 policy_none.go:49] "None policy: Start" Jun 25 14:56:24.627946 kubelet[3039]: I0625 14:56:24.627917 3039 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:56:24.627946 kubelet[3039]: I0625 14:56:24.627956 3039 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:56:24.628220 kubelet[3039]: I0625 14:56:24.628201 3039 state_mem.go:75] "Updated machine memory state" Jun 25 14:56:24.629875 kubelet[3039]: I0625 14:56:24.629528 3039 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:56:24.629875 kubelet[3039]: I0625 14:56:24.629757 3039 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:56:24.640846 kubelet[3039]: I0625 14:56:24.640721 3039 kubelet_node_status.go:108] "Node was previously registered" 
node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.642531 kubelet[3039]: I0625 14:56:24.640869 3039 kubelet_node_status.go:73] "Successfully registered node" node="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.807788 kubelet[3039]: I0625 14:56:24.807683 3039 topology_manager.go:215] "Topology Admit Handler" podUID="03ab144c7a2b275235aadba34ec9763f" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.808082 kubelet[3039]: I0625 14:56:24.808065 3039 topology_manager.go:215] "Topology Admit Handler" podUID="6cbef7239464e167901683920f3b8409" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.808204 kubelet[3039]: I0625 14:56:24.808191 3039 topology_manager.go:215] "Topology Admit Handler" podUID="87a9f4b3e81f279ccdd81b531fad9666" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.815615 kubelet[3039]: W0625 14:56:24.815587 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:56:24.818751 kubelet[3039]: W0625 14:56:24.818722 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:56:24.819024 kubelet[3039]: W0625 14:56:24.819011 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:56:24.904482 kubelet[3039]: I0625 14:56:24.904446 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87a9f4b3e81f279ccdd81b531fad9666-kubeconfig\") pod \"kube-scheduler-ci-3815.2.4-a-2c7c8223bb\" (UID: \"87a9f4b3e81f279ccdd81b531fad9666\") " pod="kube-system/kube-scheduler-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904482 kubelet[3039]: I0625 14:56:24.904492 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-ca-certs\") pod \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904679 kubelet[3039]: I0625 14:56:24.904515 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-ca-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904679 kubelet[3039]: I0625 14:56:24.904535 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904679 kubelet[3039]: I0625 14:56:24.904556 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-k8s-certs\") pod 
\"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904679 kubelet[3039]: I0625 14:56:24.904581 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03ab144c7a2b275235aadba34ec9763f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" (UID: \"03ab144c7a2b275235aadba34ec9763f\") " pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904679 kubelet[3039]: I0625 14:56:24.904600 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904807 kubelet[3039]: I0625 14:56:24.904621 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:24.904807 kubelet[3039]: I0625 14:56:24.904645 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cbef7239464e167901683920f3b8409-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" (UID: \"6cbef7239464e167901683920f3b8409\") " pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:25.477389 kubelet[3039]: I0625 14:56:25.477356 3039 apiserver.go:52] "Watching apiserver" Jun 25 14:56:25.502525 kubelet[3039]: I0625 14:56:25.502478 3039 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:56:25.666997 kubelet[3039]: W0625 14:56:25.666968 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:56:25.667226 kubelet[3039]: E0625 14:56:25.667207 3039 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3815.2.4-a-2c7c8223bb\" already exists" pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:25.667829 kubelet[3039]: W0625 14:56:25.667020 3039 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 14:56:25.667981 kubelet[3039]: E0625 14:56:25.667966 3039 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3815.2.4-a-2c7c8223bb\" already exists" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" Jun 25 14:56:25.741905 kubelet[3039]: I0625 14:56:25.741780 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.4-a-2c7c8223bb" podStartSLOduration=1.740680024 podCreationTimestamp="2024-06-25 14:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 
14:56:25.698623484 +0000 UTC m=+1.309814361" watchObservedRunningTime="2024-06-25 14:56:25.740680024 +0000 UTC m=+1.351870861" Jun 25 14:56:25.797121 kubelet[3039]: I0625 14:56:25.797083 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.4-a-2c7c8223bb" podStartSLOduration=1.797039525 podCreationTimestamp="2024-06-25 14:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:56:25.741961256 +0000 UTC m=+1.353152133" watchObservedRunningTime="2024-06-25 14:56:25.797039525 +0000 UTC m=+1.408230402" Jun 25 14:56:26.148994 kubelet[3039]: I0625 14:56:26.148870 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.4-a-2c7c8223bb" podStartSLOduration=2.1488182399999998 podCreationTimestamp="2024-06-25 14:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:56:25.803829496 +0000 UTC m=+1.415020373" watchObservedRunningTime="2024-06-25 14:56:26.14881824 +0000 UTC m=+1.760009117" Jun 25 14:56:29.413506 sudo[2177]: pam_unix(sudo:session): session closed for user root Jun 25 14:56:29.412000 audit[2177]: USER_END pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:56:29.412000 audit[2177]: CRED_DISP pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:56:29.452596 kernel: audit: type=1106 audit(1719327389.412:219): pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:56:29.452728 kernel: audit: type=1104 audit(1719327389.412:220): pid=2177 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:56:29.488448 sshd[2173]: pam_unix(sshd:session): session closed for user core Jun 25 14:56:29.487000 audit[2173]: USER_END pid=2173 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.488000 audit[2173]: CRED_DISP pid=2173 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.536414 kernel: audit: type=1106 audit(1719327389.487:221): pid=2173 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.536505 kernel: audit: type=1104 audit(1719327389.488:222): pid=2173 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:56:29.516645 systemd[1]: sshd@6-10.200.20.26:22-10.200.16.10:60420.service: Deactivated successfully. Jun 25 14:56:29.517519 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:56:29.535418 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:56:29.536347 systemd-logind[1576]: Removed session 9. Jun 25 14:56:29.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.26:22-10.200.16.10:60420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:29.556413 kernel: audit: type=1131 audit(1719327389.515:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.26:22-10.200.16.10:60420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:56:36.939745 kubelet[3039]: I0625 14:56:36.939710 3039 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:56:36.940133 containerd[1604]: time="2024-06-25T14:56:36.940067174Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 14:56:36.940344 kubelet[3039]: I0625 14:56:36.940257 3039 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:56:37.953320 kubelet[3039]: I0625 14:56:37.953277 3039 topology_manager.go:215] "Topology Admit Handler" podUID="63da3b3a-1274-4a45-98cf-18119bb84894" podNamespace="kube-system" podName="kube-proxy-6dsp4" Jun 25 14:56:38.075099 kubelet[3039]: I0625 14:56:38.075070 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/63da3b3a-1274-4a45-98cf-18119bb84894-kube-proxy\") pod \"kube-proxy-6dsp4\" (UID: \"63da3b3a-1274-4a45-98cf-18119bb84894\") " pod="kube-system/kube-proxy-6dsp4" Jun 25 14:56:38.075318 kubelet[3039]: I0625 14:56:38.075304 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63da3b3a-1274-4a45-98cf-18119bb84894-xtables-lock\") pod \"kube-proxy-6dsp4\" (UID: \"63da3b3a-1274-4a45-98cf-18119bb84894\") " pod="kube-system/kube-proxy-6dsp4" Jun 25 14:56:38.075431 kubelet[3039]: I0625 14:56:38.075419 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2ss\" (UniqueName: \"kubernetes.io/projected/63da3b3a-1274-4a45-98cf-18119bb84894-kube-api-access-zx2ss\") pod \"kube-proxy-6dsp4\" (UID: \"63da3b3a-1274-4a45-98cf-18119bb84894\") " pod="kube-system/kube-proxy-6dsp4" Jun 25 14:56:38.075508 kubelet[3039]: I0625 14:56:38.075499 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63da3b3a-1274-4a45-98cf-18119bb84894-lib-modules\") pod \"kube-proxy-6dsp4\" (UID: \"63da3b3a-1274-4a45-98cf-18119bb84894\") " pod="kube-system/kube-proxy-6dsp4" Jun 25 14:56:38.157968 kubelet[3039]: I0625 14:56:38.157926 3039 topology_manager.go:215] "Topology Admit Handler" podUID="5521d752-499b-4dc0-a8f0-bd10ae1d1075" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-kqbg5" Jun 25 14:56:38.176360 kubelet[3039]: I0625 14:56:38.176322 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5521d752-499b-4dc0-a8f0-bd10ae1d1075-var-lib-calico\") pod \"tigera-operator-76c4974c85-kqbg5\" (UID: \"5521d752-499b-4dc0-a8f0-bd10ae1d1075\") " pod="tigera-operator/tigera-operator-76c4974c85-kqbg5" Jun 25 14:56:38.176507 kubelet[3039]: I0625 14:56:38.176397 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8wh\" (UniqueName: \"kubernetes.io/projected/5521d752-499b-4dc0-a8f0-bd10ae1d1075-kube-api-access-gx8wh\") pod \"tigera-operator-76c4974c85-kqbg5\" (UID: \"5521d752-499b-4dc0-a8f0-bd10ae1d1075\") " pod="tigera-operator/tigera-operator-76c4974c85-kqbg5" Jun 25 14:56:38.461409 containerd[1604]: time="2024-06-25T14:56:38.461318539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-kqbg5,Uid:5521d752-499b-4dc0-a8f0-bd10ae1d1075,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:56:38.557774 containerd[1604]: time="2024-06-25T14:56:38.557725305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dsp4,Uid:63da3b3a-1274-4a45-98cf-18119bb84894,Namespace:kube-system,Attempt:0,}" Jun 25 14:56:39.767337 containerd[1604]: time="2024-06-25T14:56:39.767138504Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:39.767717 containerd[1604]: time="2024-06-25T14:56:39.767315068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:39.767717 containerd[1604]: time="2024-06-25T14:56:39.767339028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:39.767717 containerd[1604]: time="2024-06-25T14:56:39.767353389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:39.786439 systemd[1]: run-containerd-runc-k8s.io-1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78-runc.7tLR6l.mount: Deactivated successfully. Jun 25 14:56:39.823949 containerd[1604]: time="2024-06-25T14:56:39.823898340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-kqbg5,Uid:5521d752-499b-4dc0-a8f0-bd10ae1d1075,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78\"" Jun 25 14:56:39.826218 containerd[1604]: time="2024-06-25T14:56:39.826190545Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:56:39.829271 containerd[1604]: time="2024-06-25T14:56:39.829177684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:39.829271 containerd[1604]: time="2024-06-25T14:56:39.829233725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:39.829463 containerd[1604]: time="2024-06-25T14:56:39.829252766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:39.829545 containerd[1604]: time="2024-06-25T14:56:39.829447809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:39.862842 containerd[1604]: time="2024-06-25T14:56:39.862794025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6dsp4,Uid:63da3b3a-1274-4a45-98cf-18119bb84894,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e2ebbe13ad5c2e140c4a79294fce5413426c6c20efd1f00ad4a979df86650cb\"" Jun 25 14:56:39.867521 containerd[1604]: time="2024-06-25T14:56:39.866648021Z" level=info msg="CreateContainer within sandbox \"4e2ebbe13ad5c2e140c4a79294fce5413426c6c20efd1f00ad4a979df86650cb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:56:40.492484 containerd[1604]: time="2024-06-25T14:56:40.492433572Z" level=info msg="CreateContainer within sandbox \"4e2ebbe13ad5c2e140c4a79294fce5413426c6c20efd1f00ad4a979df86650cb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ba8089b3e271dc057a58b042291a9606ae800946a1caff9e4a7a5e10298ed6d\"" Jun 25 14:56:40.494727 containerd[1604]: time="2024-06-25T14:56:40.494003002Z" level=info msg="StartContainer for \"7ba8089b3e271dc057a58b042291a9606ae800946a1caff9e4a7a5e10298ed6d\"" Jun 25 14:56:40.586355 containerd[1604]: time="2024-06-25T14:56:40.586027983Z" level=info msg="StartContainer for \"7ba8089b3e271dc057a58b042291a9606ae800946a1caff9e4a7a5e10298ed6d\" returns successfully" Jun 25 14:56:40.600000 audit[3261]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.600000 audit[3261]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc947960 a2=0 a3=1 items=0 ppid=3220 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.640465 kernel: audit: type=1325 audit(1719327400.600:224): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.640586 kernel: audit: type=1300 audit(1719327400.600:224): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc947960 a2=0 a3=1 items=0 ppid=3220 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:56:40.654736 kernel: audit: type=1327 audit(1719327400.600:224): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:56:40.604000 audit[3262]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.668333 kernel: audit: type=1325 audit(1719327400.604:225): table=mangle:42 family=2 entries=1 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.604000 audit[3262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2785ef0 a2=0 a3=1 items=0 ppid=3220 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.694628 kernel: audit: type=1300 
audit(1719327400.604:225): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc2785ef0 a2=0 a3=1 items=0 ppid=3220 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:56:40.709347 kernel: audit: type=1327 audit(1719327400.604:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:56:40.606000 audit[3263]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.723500 kernel: audit: type=1325 audit(1719327400.606:226): table=nat:43 family=2 entries=1 op=nft_register_chain pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.606000 audit[3263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd63ef330 a2=0 a3=1 items=0 ppid=3220 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.751299 kernel: audit: type=1300 audit(1719327400.606:226): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd63ef330 a2=0 a3=1 items=0 ppid=3220 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:56:40.766044 kernel: audit: type=1327 audit(1719327400.606:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:56:40.769957 kernel: audit: type=1325 audit(1719327400.607:227): table=filter:44 family=2 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.607000 audit[3264]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.607000 audit[3264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffddb33930 a2=0 a3=1 items=0 ppid=3220 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:56:40.618000 audit[3265]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.618000 audit[3265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff44add20 a2=0 a3=1 items=0 ppid=3220 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.618000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:56:40.639000 audit[3266]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.639000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9c34890 a2=0 a3=1 items=0 ppid=3220 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:56:40.703000 audit[3267]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.703000 audit[3267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7dd1150 a2=0 a3=1 items=0 ppid=3220 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:56:40.706000 audit[3269]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.706000 audit[3269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffa366d70 a2=0 a3=1 items=0 ppid=3220 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:56:40.710000 audit[3272]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.710000 audit[3272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2de8380 a2=0 a3=1 items=0 ppid=3220 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:56:40.712000 audit[3273]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.712000 audit[3273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc74c0be0 a2=0 a3=1 items=0 ppid=3220 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:56:40.714000 audit[3275]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.714000 audit[3275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6aac840 a2=0 a3=1 items=0 ppid=3220 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:56:40.716000 audit[3276]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.716000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef99a2c0 a2=0 a3=1 items=0 ppid=3220 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:56:40.719000 audit[3278]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3278 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.719000 audit[3278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffce8fc8f0 a2=0 a3=1 items=0 ppid=3220 pid=3278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:56:40.727000 audit[3281]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.727000 audit[3281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb719170 a2=0 a3=1 items=0 ppid=3220 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:56:40.729000 audit[3282]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.729000 audit[3282]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=100 a0=3 a1=ffffdb508240 a2=0 a3=1 items=0 ppid=3220 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:56:40.732000 audit[3284]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.732000 audit[3284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc85a7d0 a2=0 a3=1 items=0 ppid=3220 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:56:40.733000 audit[3285]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.733000 audit[3285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff4327580 a2=0 a3=1 items=0 ppid=3220 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:56:40.736000 audit[3287]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.736000 audit[3287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc7a54860 a2=0 a3=1 items=0 ppid=3220 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:56:40.740000 audit[3290]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.740000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9beba20 a2=0 a3=1 items=0 ppid=3220 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:56:40.786000 audit[3293]: 
NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.786000 audit[3293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffdc4d330 a2=0 a3=1 items=0 ppid=3220 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:56:40.788000 audit[3294]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.788000 audit[3294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc9ec10d0 a2=0 a3=1 items=0 ppid=3220 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.788000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:56:40.790000 audit[3296]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.790000 audit[3296]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffe4928d0 a2=0 a3=1 items=0 ppid=3220 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:56:40.794000 audit[3299]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.794000 audit[3299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe6cc2f0 a2=0 a3=1 items=0 ppid=3220 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:56:40.796000 audit[3300]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.796000 audit[3300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffbe34d80 a2=0 a3=1 items=0 ppid=3220 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.796000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:56:40.798000 audit[3302]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:56:40.798000 audit[3302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff9a21310 a2=0 a3=1 items=0 ppid=3220 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:56:40.819000 audit[3308]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:40.819000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffd016cdb0 a2=0 a3=1 items=0 ppid=3220 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:40.833000 audit[3308]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:40.833000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd016cdb0 a2=0 a3=1 items=0 ppid=3220 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.833000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:40.835000 audit[3314]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.835000 audit[3314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff708e040 a2=0 a3=1 items=0 ppid=3220 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:56:40.837000 audit[3316]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.837000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffda477bf0 a2=0 a3=1 items=0 ppid=3220 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.837000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:56:40.841000 audit[3319]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=3319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.841000 audit[3319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff810a1f0 a2=0 a3=1 items=0 ppid=3220 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:56:40.842000 audit[3320]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=3320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.842000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd35a9b0 a2=0 a3=1 items=0 ppid=3220 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:56:40.845000 audit[3322]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.845000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef385810 a2=0 a3=1 items=0 ppid=3220 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.845000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:56:40.847000 audit[3323]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.847000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff02fe950 a2=0 a3=1 items=0 ppid=3220 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:56:40.850000 audit[3325]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.850000 audit[3325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffee860890 a2=0 a3=1 items=0 ppid=3220 pid=3325 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:56:40.853000 audit[3328]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=3328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.853000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe52cb390 a2=0 a3=1 items=0 ppid=3220 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:56:40.855000 audit[3329]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=3329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.855000 audit[3329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc573d9c0 a2=0 a3=1 items=0 ppid=3220 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.855000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:56:40.857000 audit[3331]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.857000 audit[3331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc192f20 a2=0 a3=1 items=0 ppid=3220 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.857000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:56:40.858000 audit[3332]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=3332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.858000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1a28810 a2=0 a3=1 items=0 ppid=3220 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.858000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:56:40.861000 audit[3334]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule 
pid=3334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.861000 audit[3334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe8d06350 a2=0 a3=1 items=0 ppid=3220 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.861000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:56:40.865000 audit[3337]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.865000 audit[3337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc12eecf0 a2=0 a3=1 items=0 ppid=3220 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:56:40.869000 audit[3340]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.869000 audit[3340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff54a16a0 a2=0 a3=1 items=0 ppid=3220 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:56:40.870000 audit[3341]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.870000 audit[3341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2fa65e0 a2=0 a3=1 items=0 ppid=3220 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:56:40.872000 audit[3343]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.872000 audit[3343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd493a560 a2=0 a3=1 items=0 ppid=3220 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.872000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:56:40.876000 audit[3346]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=3346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.876000 audit[3346]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffcbd1b2e0 a2=0 a3=1 items=0 ppid=3220 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:56:40.877000 audit[3347]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=3347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.877000 audit[3347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecc9a000 a2=0 a3=1 items=0 ppid=3220 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.877000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:56:40.879000 audit[3349]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=3349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.879000 audit[3349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdf5cd660 a2=0 a3=1 items=0 ppid=3220 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:56:40.880000 audit[3350]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.880000 audit[3350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7a89600 a2=0 a3=1 items=0 ppid=3220 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:56:40.882000 audit[3352]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.882000 audit[3352]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffeb5338e0 a2=0 a3=1 items=0 ppid=3220 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:56:40.886000 audit[3355]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:56:40.886000 audit[3355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd970cb50 a2=0 a3=1 items=0 ppid=3220 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:56:40.889000 audit[3357]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:56:40.889000 audit[3357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffcd5e6070 a2=0 a3=1 items=0 ppid=3220 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.889000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:40.889000 audit[3357]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=3357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:56:40.889000 audit[3357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffcd5e6070 a2=0 a3=1 items=0 ppid=3220 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:40.889000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:41.599179 kubelet[3039]: I0625 14:56:41.599007 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6dsp4" podStartSLOduration=4.5989712449999995 podCreationTimestamp="2024-06-25 14:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:56:41.598786921 +0000 UTC m=+17.209977798" watchObservedRunningTime="2024-06-25 14:56:41.598971245 +0000 UTC m=+17.210162122" Jun 25 14:56:44.680935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936398327.mount: Deactivated successfully. 
Jun 25 14:56:45.641816 containerd[1604]: time="2024-06-25T14:56:45.641766566Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:45.688371 containerd[1604]: time="2024-06-25T14:56:45.688317801Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473682" Jun 25 14:56:45.750420 containerd[1604]: time="2024-06-25T14:56:45.750362394Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:45.796265 containerd[1604]: time="2024-06-25T14:56:45.796218697Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:45.800931 containerd[1604]: time="2024-06-25T14:56:45.800879940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:45.802505 containerd[1604]: time="2024-06-25T14:56:45.802474929Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 5.976038939s" Jun 25 14:56:45.802640 containerd[1604]: time="2024-06-25T14:56:45.802607171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:56:45.805531 containerd[1604]: time="2024-06-25T14:56:45.805492863Z" level=info msg="CreateContainer within sandbox \"1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:56:46.246248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893711963.mount: Deactivated successfully. Jun 25 14:56:46.251360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089481620.mount: Deactivated successfully. 
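From the pull that completes above, containerd reports the roughly 19.5 MB tigera-operator image arriving in just under six seconds. A rough throughput estimate from the two figures in that "Pulled image" message (approximate only, since these are the sizes containerd reports rather than actual wire traffic):

```python
# Figures reported by containerd above for quay.io/tigera/operator:v1.34.0.
size_bytes = 19_467_821      # image size from the "Pulled image" message
elapsed_s = 5.976_038_939    # pull duration from the same message

print(f"~{size_bytes / elapsed_s / 2**20:.2f} MiB/s effective pull rate")
# -> ~3.11 MiB/s
```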
Jun 25 14:56:46.396701 containerd[1604]: time="2024-06-25T14:56:46.396622688Z" level=info msg="CreateContainer within sandbox \"1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6\"" Jun 25 14:56:46.398592 containerd[1604]: time="2024-06-25T14:56:46.397125377Z" level=info msg="StartContainer for \"10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6\"" Jun 25 14:56:46.448513 containerd[1604]: time="2024-06-25T14:56:46.448467765Z" level=info msg="StartContainer for \"10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6\" returns successfully" Jun 25 14:56:51.421000 audit[3405]: NETFILTER_CFG table=filter:92 family=2 entries=15 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.427479 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 14:56:51.427553 kernel: audit: type=1325 audit(1719327411.421:275): table=filter:92 family=2 entries=15 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.421000 audit[3405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd3897d30 a2=0 a3=1 items=0 ppid=3220 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.466431 kernel: audit: type=1300 audit(1719327411.421:275): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd3897d30 a2=0 a3=1 items=0 ppid=3220 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.479664 kernel: audit: type=1327 audit(1719327411.421:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.422000 audit[3405]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.492621 kernel: audit: type=1325 audit(1719327411.422:276): table=nat:93 family=2 entries=12 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.422000 audit[3405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3897d30 a2=0 a3=1 items=0 ppid=3220 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.517677 kernel: audit: type=1300 audit(1719327411.422:276): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd3897d30 a2=0 a3=1 items=0 ppid=3220 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.531130 kernel: audit: 
type=1327 audit(1719327411.422:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.479000 audit[3407]: NETFILTER_CFG table=filter:94 family=2 entries=16 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.544793 kernel: audit: type=1325 audit(1719327411.479:277): table=filter:94 family=2 entries=16 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.479000 audit[3407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe66a1e30 a2=0 a3=1 items=0 ppid=3220 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.570858 kernel: audit: type=1300 audit(1719327411.479:277): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe66a1e30 a2=0 a3=1 items=0 ppid=3220 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.583825 kernel: audit: type=1327 audit(1719327411.479:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.492000 audit[3407]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.596711 kernel: audit: type=1325 audit(1719327411.492:278): table=nat:95 family=2 entries=12 op=nft_register_rule pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:51.492000 audit[3407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe66a1e30 a2=0 a3=1 items=0 ppid=3220 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:51.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:51.604282 kubelet[3039]: I0625 14:56:51.604236 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-kqbg5" podStartSLOduration=7.626532714 podCreationTimestamp="2024-06-25 14:56:38 +0000 UTC" firstStartedPulling="2024-06-25 14:56:39.825320088 +0000 UTC m=+15.436510965" lastFinishedPulling="2024-06-25 14:56:45.802982618 +0000 UTC m=+21.414173495" observedRunningTime="2024-06-25 14:56:46.606435439 +0000 UTC m=+22.217626316" watchObservedRunningTime="2024-06-25 14:56:51.604195244 +0000 UTC m=+27.215386121" Jun 25 14:56:51.604834 kubelet[3039]: I0625 14:56:51.604817 3039 topology_manager.go:215] "Topology Admit Handler" podUID="1d5280f9-138f-48dc-a70e-c6611881ab28" podNamespace="calico-system" podName="calico-typha-648786c6d7-jtqkt" Jun 25 14:56:51.643173 kubelet[3039]: I0625 14:56:51.643138 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bddjd\" (UniqueName: 
\"kubernetes.io/projected/1d5280f9-138f-48dc-a70e-c6611881ab28-kube-api-access-bddjd\") pod \"calico-typha-648786c6d7-jtqkt\" (UID: \"1d5280f9-138f-48dc-a70e-c6611881ab28\") " pod="calico-system/calico-typha-648786c6d7-jtqkt" Jun 25 14:56:51.643173 kubelet[3039]: I0625 14:56:51.643185 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d5280f9-138f-48dc-a70e-c6611881ab28-tigera-ca-bundle\") pod \"calico-typha-648786c6d7-jtqkt\" (UID: \"1d5280f9-138f-48dc-a70e-c6611881ab28\") " pod="calico-system/calico-typha-648786c6d7-jtqkt" Jun 25 14:56:51.643362 kubelet[3039]: I0625 14:56:51.643206 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1d5280f9-138f-48dc-a70e-c6611881ab28-typha-certs\") pod \"calico-typha-648786c6d7-jtqkt\" (UID: \"1d5280f9-138f-48dc-a70e-c6611881ab28\") " pod="calico-system/calico-typha-648786c6d7-jtqkt" Jun 25 14:56:51.664824 kubelet[3039]: I0625 14:56:51.664794 3039 topology_manager.go:215] "Topology Admit Handler" podUID="b985a9d2-b07e-47f8-8f5f-664717024a4b" podNamespace="calico-system" podName="calico-node-qsl6f" Jun 25 14:56:51.743893 kubelet[3039]: I0625 14:56:51.743852 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-policysync\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.743893 kubelet[3039]: I0625 14:56:51.743894 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b985a9d2-b07e-47f8-8f5f-664717024a4b-node-certs\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744047 kubelet[3039]: I0625 14:56:51.743914 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-cni-net-dir\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744047 kubelet[3039]: I0625 14:56:51.743935 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b985a9d2-b07e-47f8-8f5f-664717024a4b-tigera-ca-bundle\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744047 kubelet[3039]: I0625 14:56:51.743966 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8px9w\" (UniqueName: \"kubernetes.io/projected/b985a9d2-b07e-47f8-8f5f-664717024a4b-kube-api-access-8px9w\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744047 kubelet[3039]: I0625 14:56:51.743986 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-var-run-calico\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " 
pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744047 kubelet[3039]: I0625 14:56:51.744007 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-xtables-lock\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744187 kubelet[3039]: I0625 14:56:51.744027 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-var-lib-calico\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744187 kubelet[3039]: I0625 14:56:51.744047 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-cni-log-dir\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744187 kubelet[3039]: I0625 14:56:51.744065 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-flexvol-driver-host\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744187 kubelet[3039]: I0625 14:56:51.744097 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-lib-modules\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.744187 kubelet[3039]: I0625 14:56:51.744114 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b985a9d2-b07e-47f8-8f5f-664717024a4b-cni-bin-dir\") pod \"calico-node-qsl6f\" (UID: \"b985a9d2-b07e-47f8-8f5f-664717024a4b\") " pod="calico-system/calico-node-qsl6f" Jun 25 14:56:51.782919 kubelet[3039]: I0625 14:56:51.782886 3039 topology_manager.go:215] "Topology Admit Handler" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" podNamespace="calico-system" podName="csi-node-driver-6rz48" Jun 25 14:56:51.783388 kubelet[3039]: E0625 14:56:51.783366 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:56:51.844325 kubelet[3039]: I0625 14:56:51.844278 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7813833d-a993-419c-9c27-7d6c8ce9f5ba-registration-dir\") pod \"csi-node-driver-6rz48\" (UID: \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\") " pod="calico-system/csi-node-driver-6rz48" Jun 25 14:56:51.844521 kubelet[3039]: I0625 14:56:51.844510 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7813833d-a993-419c-9c27-7d6c8ce9f5ba-kubelet-dir\") pod \"csi-node-driver-6rz48\" (UID: \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\") " pod="calico-system/csi-node-driver-6rz48" Jun 25 14:56:51.844698 kubelet[3039]: I0625 14:56:51.844687 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7813833d-a993-419c-9c27-7d6c8ce9f5ba-socket-dir\") pod \"csi-node-driver-6rz48\" (UID: \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\") " pod="calico-system/csi-node-driver-6rz48" Jun 25 14:56:51.844810 kubelet[3039]: I0625 14:56:51.844800 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjcq5\" (UniqueName: \"kubernetes.io/projected/7813833d-a993-419c-9c27-7d6c8ce9f5ba-kube-api-access-hjcq5\") pod \"csi-node-driver-6rz48\" (UID: \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\") " pod="calico-system/csi-node-driver-6rz48" Jun 25 14:56:51.844925 kubelet[3039]: I0625 14:56:51.844915 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7813833d-a993-419c-9c27-7d6c8ce9f5ba-varrun\") pod \"csi-node-driver-6rz48\" (UID: \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\") " pod="calico-system/csi-node-driver-6rz48" Jun 25 14:56:51.847078 kubelet[3039]: E0625 14:56:51.847062 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.847214 kubelet[3039]: W0625 14:56:51.847201 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.847310 kubelet[3039]: E0625 14:56:51.847300 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.855456 kubelet[3039]: E0625 14:56:51.855435 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.855584 kubelet[3039]: W0625 14:56:51.855569 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.855686 kubelet[3039]: E0625 14:56:51.855676 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.885313 kubelet[3039]: E0625 14:56:51.885272 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.885466 kubelet[3039]: W0625 14:56:51.885450 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.885544 kubelet[3039]: E0625 14:56:51.885534 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:51.909826 containerd[1604]: time="2024-06-25T14:56:51.909416647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648786c6d7-jtqkt,Uid:1d5280f9-138f-48dc-a70e-c6611881ab28,Namespace:calico-system,Attempt:0,}" Jun 25 14:56:51.946612 kubelet[3039]: E0625 14:56:51.946585 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.946612 kubelet[3039]: W0625 14:56:51.946607 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.946974 kubelet[3039]: E0625 14:56:51.946629 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.948486 kubelet[3039]: E0625 14:56:51.948464 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.948486 kubelet[3039]: W0625 14:56:51.948481 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.948654 kubelet[3039]: E0625 14:56:51.948505 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.949110 kubelet[3039]: E0625 14:56:51.949084 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.949110 kubelet[3039]: W0625 14:56:51.949105 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.949366 kubelet[3039]: E0625 14:56:51.949223 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.950372 kubelet[3039]: E0625 14:56:51.950345 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.950372 kubelet[3039]: W0625 14:56:51.950366 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.950637 kubelet[3039]: E0625 14:56:51.950534 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:51.951177 kubelet[3039]: E0625 14:56:51.950875 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.951177 kubelet[3039]: W0625 14:56:51.950892 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.951177 kubelet[3039]: E0625 14:56:51.951145 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.951177 kubelet[3039]: E0625 14:56:51.951161 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.951177 kubelet[3039]: W0625 14:56:51.951169 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.951535 kubelet[3039]: E0625 14:56:51.951331 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.951817 kubelet[3039]: E0625 14:56:51.951790 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.951817 kubelet[3039]: W0625 14:56:51.951817 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.951987 kubelet[3039]: E0625 14:56:51.951921 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.952048 kubelet[3039]: E0625 14:56:51.952001 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.952048 kubelet[3039]: W0625 14:56:51.952009 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.952142 kubelet[3039]: E0625 14:56:51.952122 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.952249 kubelet[3039]: E0625 14:56:51.952230 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.952249 kubelet[3039]: W0625 14:56:51.952243 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.952346 kubelet[3039]: E0625 14:56:51.952260 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:51.952786 kubelet[3039]: E0625 14:56:51.952762 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.952786 kubelet[3039]: W0625 14:56:51.952782 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.952971 kubelet[3039]: E0625 14:56:51.952923 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.953064 containerd[1604]: time="2024-06-25T14:56:51.952693202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:51.953132 kubelet[3039]: E0625 14:56:51.953118 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.953132 kubelet[3039]: W0625 14:56:51.953128 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.953381 kubelet[3039]: E0625 14:56:51.953218 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.953685 kubelet[3039]: E0625 14:56:51.953662 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.953685 kubelet[3039]: W0625 14:56:51.953682 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.953817 kubelet[3039]: E0625 14:56:51.953789 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.953867 kubelet[3039]: E0625 14:56:51.953847 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.953867 kubelet[3039]: W0625 14:56:51.953854 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.954031 kubelet[3039]: E0625 14:56:51.953933 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.954833 containerd[1604]: time="2024-06-25T14:56:51.954273948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:51.954833 containerd[1604]: time="2024-06-25T14:56:51.954352749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:51.954833 containerd[1604]: time="2024-06-25T14:56:51.954363949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:51.955036 kubelet[3039]: E0625 14:56:51.954901 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.955350 kubelet[3039]: W0625 14:56:51.955027 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.955594 kubelet[3039]: E0625 14:56:51.955464 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.955708 kubelet[3039]: E0625 14:56:51.955690 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.955708 kubelet[3039]: W0625 14:56:51.955705 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.956115 kubelet[3039]: E0625 14:56:51.955805 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.958607 kubelet[3039]: E0625 14:56:51.958406 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.958607 kubelet[3039]: W0625 14:56:51.958425 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.958607 kubelet[3039]: E0625 14:56:51.958515 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.958939 kubelet[3039]: E0625 14:56:51.958777 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.958939 kubelet[3039]: W0625 14:56:51.958787 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.958939 kubelet[3039]: E0625 14:56:51.958856 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.959340 kubelet[3039]: E0625 14:56:51.959098 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.959340 kubelet[3039]: W0625 14:56:51.959108 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.959340 kubelet[3039]: E0625 14:56:51.959175 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:51.959552 kubelet[3039]: E0625 14:56:51.959522 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.959552 kubelet[3039]: W0625 14:56:51.959532 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.959827 kubelet[3039]: E0625 14:56:51.959719 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.959956 kubelet[3039]: E0625 14:56:51.959927 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.959956 kubelet[3039]: W0625 14:56:51.959936 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.960218 kubelet[3039]: E0625 14:56:51.960127 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.960635 kubelet[3039]: E0625 14:56:51.960418 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.960635 kubelet[3039]: W0625 14:56:51.960428 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.960635 kubelet[3039]: E0625 14:56:51.960530 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.960921 kubelet[3039]: E0625 14:56:51.960761 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.960921 kubelet[3039]: W0625 14:56:51.960772 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.960921 kubelet[3039]: E0625 14:56:51.960838 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.961302 kubelet[3039]: E0625 14:56:51.961097 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.961302 kubelet[3039]: W0625 14:56:51.961107 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.961302 kubelet[3039]: E0625 14:56:51.961122 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:51.961594 kubelet[3039]: E0625 14:56:51.961428 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.961594 kubelet[3039]: W0625 14:56:51.961438 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.961594 kubelet[3039]: E0625 14:56:51.961454 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.961766 kubelet[3039]: E0625 14:56:51.961724 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.961766 kubelet[3039]: W0625 14:56:51.961733 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.961766 kubelet[3039]: E0625 14:56:51.961746 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.970312 kubelet[3039]: E0625 14:56:51.970270 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:51.970312 kubelet[3039]: W0625 14:56:51.970311 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:51.971064 kubelet[3039]: E0625 14:56:51.970330 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:51.971785 containerd[1604]: time="2024-06-25T14:56:51.971748677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsl6f,Uid:b985a9d2-b07e-47f8-8f5f-664717024a4b,Namespace:calico-system,Attempt:0,}" Jun 25 14:56:52.032661 containerd[1604]: time="2024-06-25T14:56:52.023493567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648786c6d7-jtqkt,Uid:1d5280f9-138f-48dc-a70e-c6611881ab28,Namespace:calico-system,Attempt:0,} returns sandbox id \"81b49d95cfeae4a1ab842f97131ded6f445c6fc7d95b9da168ffa098022b7504\"" Jun 25 14:56:52.033477 containerd[1604]: time="2024-06-25T14:56:52.032935921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:56:52.033477 containerd[1604]: time="2024-06-25T14:56:52.032991762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:52.033477 containerd[1604]: time="2024-06-25T14:56:52.033009762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:56:52.033477 containerd[1604]: time="2024-06-25T14:56:52.033023202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:56:52.035993 containerd[1604]: time="2024-06-25T14:56:52.035829928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:56:52.075812 containerd[1604]: time="2024-06-25T14:56:52.075533536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qsl6f,Uid:b985a9d2-b07e-47f8-8f5f-664717024a4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\"" Jun 25 14:56:52.531000 audit[3528]: NETFILTER_CFG table=filter:96 family=2 entries=16 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:52.531000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc67c3b90 a2=0 a3=1 items=0 ppid=3220 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:52.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:52.532000 audit[3528]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:52.532000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc67c3b90 a2=0 a3=1 items=0 ppid=3220 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:52.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:53.507124 kubelet[3039]: E0625 14:56:53.506593 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:56:54.133692 containerd[1604]: time="2024-06-25T14:56:54.133640805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:54.136337 containerd[1604]: time="2024-06-25T14:56:54.136299887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:56:54.141019 containerd[1604]: time="2024-06-25T14:56:54.140989762Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:54.145950 containerd[1604]: time="2024-06-25T14:56:54.145903520Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:54.149802 containerd[1604]: time="2024-06-25T14:56:54.149773902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:54.150745 containerd[1604]: time="2024-06-25T14:56:54.150629235Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.114287299s" Jun 25 14:56:54.151264 containerd[1604]: time="2024-06-25T14:56:54.150696916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:56:54.153780 containerd[1604]: time="2024-06-25T14:56:54.153548642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:56:54.167023 containerd[1604]: time="2024-06-25T14:56:54.166984295Z" level=info msg="CreateContainer within sandbox \"81b49d95cfeae4a1ab842f97131ded6f445c6fc7d95b9da168ffa098022b7504\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:56:54.197745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789101200.mount: Deactivated successfully. Jun 25 14:56:54.217364 containerd[1604]: time="2024-06-25T14:56:54.217258655Z" level=info msg="CreateContainer within sandbox \"81b49d95cfeae4a1ab842f97131ded6f445c6fc7d95b9da168ffa098022b7504\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a78c07d6faee72bf9894bb406715eebd3c442a0c403c3f0a0412122dbda60721\"" Jun 25 14:56:54.218062 containerd[1604]: time="2024-06-25T14:56:54.217860385Z" level=info msg="StartContainer for \"a78c07d6faee72bf9894bb406715eebd3c442a0c403c3f0a0412122dbda60721\"" Jun 25 14:56:54.278676 containerd[1604]: time="2024-06-25T14:56:54.278614911Z" level=info msg="StartContainer for \"a78c07d6faee72bf9894bb406715eebd3c442a0c403c3f0a0412122dbda60721\" returns successfully" Jun 25 14:56:54.625063 kubelet[3039]: I0625 14:56:54.625024 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-648786c6d7-jtqkt" podStartSLOduration=1.506515774 podCreationTimestamp="2024-06-25 14:56:51 +0000 UTC" firstStartedPulling="2024-06-25 14:56:52.033748214 +0000 UTC m=+27.644939091" lastFinishedPulling="2024-06-25 14:56:54.15220922 +0000 UTC m=+29.763400097" observedRunningTime="2024-06-25 14:56:54.622861987 +0000 UTC m=+30.234052864" watchObservedRunningTime="2024-06-25 14:56:54.62497678 +0000 UTC m=+30.236167657" Jun 25 14:56:54.658000 audit[3581]: NETFILTER_CFG table=filter:98 family=2 entries=15 op=nft_register_rule pid=3581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:54.658000 audit[3581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdc0ca580 a2=0 a3=1 items=0 ppid=3220 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:54.658000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:54.663912 kubelet[3039]: E0625 14:56:54.663773 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.663912 kubelet[3039]: W0625 14:56:54.663791 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Jun 25 14:56:54.663912 kubelet[3039]: E0625 14:56:54.663823 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.664255 kubelet[3039]: E0625 14:56:54.664109 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.664255 kubelet[3039]: W0625 14:56:54.664121 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.664255 kubelet[3039]: E0625 14:56:54.664149 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.664583 kubelet[3039]: E0625 14:56:54.664468 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.664583 kubelet[3039]: W0625 14:56:54.664478 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.664583 kubelet[3039]: E0625 14:56:54.664491 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.664908 kubelet[3039]: E0625 14:56:54.664751 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.664908 kubelet[3039]: W0625 14:56:54.664761 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.664908 kubelet[3039]: E0625 14:56:54.664774 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.665184 kubelet[3039]: E0625 14:56:54.665078 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.665184 kubelet[3039]: W0625 14:56:54.665089 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.665184 kubelet[3039]: E0625 14:56:54.665101 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.665479 kubelet[3039]: E0625 14:56:54.665376 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.665479 kubelet[3039]: W0625 14:56:54.665386 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.665479 kubelet[3039]: E0625 14:56:54.665397 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.665764 kubelet[3039]: E0625 14:56:54.665652 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.665764 kubelet[3039]: W0625 14:56:54.665662 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.665764 kubelet[3039]: E0625 14:56:54.665674 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.666077 kubelet[3039]: E0625 14:56:54.665944 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.666077 kubelet[3039]: W0625 14:56:54.665953 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.666077 kubelet[3039]: E0625 14:56:54.665968 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.666365 kubelet[3039]: E0625 14:56:54.666245 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.666365 kubelet[3039]: W0625 14:56:54.666255 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.666365 kubelet[3039]: E0625 14:56:54.666267 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.666647 kubelet[3039]: E0625 14:56:54.666537 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.666647 kubelet[3039]: W0625 14:56:54.666547 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.666647 kubelet[3039]: E0625 14:56:54.666561 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.666918 kubelet[3039]: E0625 14:56:54.666813 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.666918 kubelet[3039]: W0625 14:56:54.666822 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.666918 kubelet[3039]: E0625 14:56:54.666834 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.667218 kubelet[3039]: E0625 14:56:54.667096 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.667218 kubelet[3039]: W0625 14:56:54.667106 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.667218 kubelet[3039]: E0625 14:56:54.667118 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.660000 audit[3581]: NETFILTER_CFG table=nat:99 family=2 entries=19 op=nft_register_chain pid=3581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:56:54.667779 kubelet[3039]: E0625 14:56:54.667675 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.667779 kubelet[3039]: W0625 14:56:54.667686 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.667779 kubelet[3039]: E0625 14:56:54.667699 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.660000 audit[3581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdc0ca580 a2=0 a3=1 items=0 ppid=3220 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:56:54.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:56:54.668479 kubelet[3039]: E0625 14:56:54.667942 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.668479 kubelet[3039]: W0625 14:56:54.667953 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.668479 kubelet[3039]: E0625 14:56:54.667965 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.668690 kubelet[3039]: E0625 14:56:54.668600 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.668690 kubelet[3039]: W0625 14:56:54.668612 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.668690 kubelet[3039]: E0625 14:56:54.668625 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.670506 kubelet[3039]: E0625 14:56:54.670398 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.670506 kubelet[3039]: W0625 14:56:54.670409 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.670506 kubelet[3039]: E0625 14:56:54.670422 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.670732 kubelet[3039]: E0625 14:56:54.670722 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.670895 kubelet[3039]: W0625 14:56:54.670798 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.670895 kubelet[3039]: E0625 14:56:54.670816 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.671090 kubelet[3039]: E0625 14:56:54.671080 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.671154 kubelet[3039]: W0625 14:56:54.671143 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.671211 kubelet[3039]: E0625 14:56:54.671201 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.671471 kubelet[3039]: E0625 14:56:54.671460 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.671572 kubelet[3039]: W0625 14:56:54.671560 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.671633 kubelet[3039]: E0625 14:56:54.671624 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.672045 kubelet[3039]: E0625 14:56:54.672022 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.672045 kubelet[3039]: W0625 14:56:54.672039 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.672154 kubelet[3039]: E0625 14:56:54.672062 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.672262 kubelet[3039]: E0625 14:56:54.672240 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.672262 kubelet[3039]: W0625 14:56:54.672255 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.672387 kubelet[3039]: E0625 14:56:54.672267 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.672565 kubelet[3039]: E0625 14:56:54.672546 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.672565 kubelet[3039]: W0625 14:56:54.672561 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.672651 kubelet[3039]: E0625 14:56:54.672579 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.672762 kubelet[3039]: E0625 14:56:54.672746 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.672762 kubelet[3039]: W0625 14:56:54.672759 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.672833 kubelet[3039]: E0625 14:56:54.672771 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.673027 kubelet[3039]: E0625 14:56:54.673016 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.673096 kubelet[3039]: W0625 14:56:54.673085 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.673165 kubelet[3039]: E0625 14:56:54.673156 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.673420 kubelet[3039]: E0625 14:56:54.673409 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.673549 kubelet[3039]: W0625 14:56:54.673536 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.673642 kubelet[3039]: E0625 14:56:54.673620 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.673889 kubelet[3039]: E0625 14:56:54.673870 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.673889 kubelet[3039]: W0625 14:56:54.673885 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.673977 kubelet[3039]: E0625 14:56:54.673909 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.674088 kubelet[3039]: E0625 14:56:54.674071 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.674088 kubelet[3039]: W0625 14:56:54.674084 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.674167 kubelet[3039]: E0625 14:56:54.674102 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.674396 kubelet[3039]: E0625 14:56:54.674371 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.674396 kubelet[3039]: W0625 14:56:54.674394 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.674491 kubelet[3039]: E0625 14:56:54.674413 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.674776 kubelet[3039]: E0625 14:56:54.674757 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.674776 kubelet[3039]: W0625 14:56:54.674772 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.674880 kubelet[3039]: E0625 14:56:54.674790 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:56:54.674973 kubelet[3039]: E0625 14:56:54.674958 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.674973 kubelet[3039]: W0625 14:56:54.674970 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.675037 kubelet[3039]: E0625 14:56:54.674981 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.675190 kubelet[3039]: E0625 14:56:54.675172 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.675190 kubelet[3039]: W0625 14:56:54.675186 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.675273 kubelet[3039]: E0625 14:56:54.675198 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.675592 kubelet[3039]: E0625 14:56:54.675384 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.675592 kubelet[3039]: W0625 14:56:54.675396 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.675592 kubelet[3039]: E0625 14:56:54.675407 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:54.675840 kubelet[3039]: E0625 14:56:54.675821 3039 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:56:54.675887 kubelet[3039]: W0625 14:56:54.675841 3039 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:56:54.675887 kubelet[3039]: E0625 14:56:54.675854 3039 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:56:55.078415 update_engine[1580]: I0625 14:56:55.078366 1580 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 14:56:55.078415 update_engine[1580]: I0625 14:56:55.078406 1580 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 14:56:55.078877 update_engine[1580]: I0625 14:56:55.078574 1580 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 14:56:55.078950 update_engine[1580]: I0625 14:56:55.078923 1580 omaha_request_params.cc:62] Current group set to stable Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079015 1580 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079025 1580 update_attempter.cc:643] Scheduling an action processor start. Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079039 1580 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079070 1580 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079118 1580 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079123 1580 omaha_request_action.cc:272] Request: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: Jun 25 14:56:55.079147 update_engine[1580]: I0625 14:56:55.079128 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:56:55.079745 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 14:56:55.080116 update_engine[1580]: I0625 14:56:55.080091 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:56:55.080338 update_engine[1580]: I0625 14:56:55.080314 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:56:55.086894 update_engine[1580]: E0625 14:56:55.086854 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:56:55.086989 update_engine[1580]: I0625 14:56:55.086977 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 14:56:55.386940 containerd[1604]: time="2024-06-25T14:56:55.386436899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:55.389812 containerd[1604]: time="2024-06-25T14:56:55.389770551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:56:55.394642 containerd[1604]: time="2024-06-25T14:56:55.394609507Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:55.399080 containerd[1604]: time="2024-06-25T14:56:55.399051177Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:55.405648 containerd[1604]: time="2024-06-25T14:56:55.405608800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:56:55.407332 containerd[1604]: time="2024-06-25T14:56:55.407156384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 
1.253568142s" Jun 25 14:56:55.407332 containerd[1604]: time="2024-06-25T14:56:55.407218345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:56:55.411011 containerd[1604]: time="2024-06-25T14:56:55.410976724Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:56:55.439795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116162032.mount: Deactivated successfully. Jun 25 14:56:55.459573 containerd[1604]: time="2024-06-25T14:56:55.459520167Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044\"" Jun 25 14:56:55.460641 containerd[1604]: time="2024-06-25T14:56:55.460618704Z" level=info msg="StartContainer for \"d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044\"" Jun 25 14:56:55.509352 kubelet[3039]: E0625 14:56:55.508306 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:56:55.520452 containerd[1604]: time="2024-06-25T14:56:55.520403204Z" level=info msg="StartContainer for \"d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044\" returns successfully" Jun 25 14:56:56.159607 systemd[1]: run-containerd-runc-k8s.io-d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044-runc.t5njFD.mount: Deactivated successfully. Jun 25 14:56:56.159745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044-rootfs.mount: Deactivated successfully. 
Jun 25 14:56:56.458156 containerd[1604]: time="2024-06-25T14:56:56.457996132Z" level=info msg="shim disconnected" id=d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044 namespace=k8s.io Jun 25 14:56:56.458597 containerd[1604]: time="2024-06-25T14:56:56.458574061Z" level=warning msg="cleaning up after shim disconnected" id=d18a7d32fda818afaf5a41ac3c9047724aeafe5fad4c8e6a1044c4bdf49f1044 namespace=k8s.io Jun 25 14:56:56.458663 containerd[1604]: time="2024-06-25T14:56:56.458649342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:56:56.621192 containerd[1604]: time="2024-06-25T14:56:56.621143986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:56:57.507158 kubelet[3039]: E0625 14:56:57.507127 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:56:59.506402 kubelet[3039]: E0625 14:56:59.506367 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:01.506836 kubelet[3039]: E0625 14:57:01.506796 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:03.507335 kubelet[3039]: E0625 14:57:03.507268 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:03.800021 containerd[1604]: time="2024-06-25T14:57:03.799910539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:03.804167 containerd[1604]: time="2024-06-25T14:57:03.804125399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:57:03.848218 containerd[1604]: time="2024-06-25T14:57:03.848178193Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:03.894794 containerd[1604]: time="2024-06-25T14:57:03.894745382Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:03.941689 containerd[1604]: time="2024-06-25T14:57:03.941648536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:03.943197 containerd[1604]: time="2024-06-25T14:57:03.943164798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 7.321763609s" Jun 25 14:57:03.943255 containerd[1604]: time="2024-06-25T14:57:03.943201559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:57:03.946365 containerd[1604]: time="2024-06-25T14:57:03.946330244Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:57:04.152117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454922660.mount: Deactivated successfully. Jun 25 14:57:04.245100 containerd[1604]: time="2024-06-25T14:57:04.245053463Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038\"" Jun 25 14:57:04.245468 containerd[1604]: time="2024-06-25T14:57:04.245441349Z" level=info msg="StartContainer for \"64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038\"" Jun 25 14:57:04.306567 containerd[1604]: time="2024-06-25T14:57:04.306519938Z" level=info msg="StartContainer for \"64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038\" returns successfully" Jun 25 14:57:05.053795 update_engine[1580]: I0625 14:57:05.053748 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:57:05.054158 update_engine[1580]: I0625 14:57:05.053961 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:57:05.054158 update_engine[1580]: I0625 14:57:05.054131 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:57:05.149057 systemd[1]: run-containerd-runc-k8s.io-64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038-runc.AG1oMQ.mount: Deactivated successfully. 
Jun 25 14:57:05.168406 update_engine[1580]: E0625 14:57:05.168371 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:57:05.168543 update_engine[1580]: I0625 14:57:05.168511 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 14:57:05.506577 kubelet[3039]: E0625 14:57:05.506548 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:07.506299 kubelet[3039]: E0625 14:57:07.506253 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:09.507179 kubelet[3039]: E0625 14:57:09.507141 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:10.258997 containerd[1604]: time="2024-06-25T14:57:10.258940361Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:57:10.278688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038-rootfs.mount: Deactivated successfully. 
Jun 25 14:57:10.287714 kubelet[3039]: I0625 14:57:10.284475 3039 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 14:57:10.304903 kubelet[3039]: I0625 14:57:10.304852 3039 topology_manager.go:215] "Topology Admit Handler" podUID="2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae" podNamespace="kube-system" podName="coredns-5dd5756b68-r4gbf" Jun 25 14:57:10.314273 kubelet[3039]: I0625 14:57:10.314234 3039 topology_manager.go:215] "Topology Admit Handler" podUID="82c03bfc-edfe-48b6-9a13-261014350513" podNamespace="kube-system" podName="coredns-5dd5756b68-45cwv" Jun 25 14:57:10.314432 kubelet[3039]: I0625 14:57:10.314414 3039 topology_manager.go:215] "Topology Admit Handler" podUID="a5766523-eff6-42b0-8a56-15ca01f02ba3" podNamespace="calico-system" podName="calico-kube-controllers-795866d7b8-cmg8b" Jun 25 14:57:10.359943 kubelet[3039]: I0625 14:57:10.359901 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfsh\" (UniqueName: \"kubernetes.io/projected/82c03bfc-edfe-48b6-9a13-261014350513-kube-api-access-pbfsh\") pod \"coredns-5dd5756b68-45cwv\" (UID: \"82c03bfc-edfe-48b6-9a13-261014350513\") " pod="kube-system/coredns-5dd5756b68-45cwv" Jun 25 14:57:10.359943 kubelet[3039]: I0625 14:57:10.359953 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twdwk\" (UniqueName: \"kubernetes.io/projected/2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae-kube-api-access-twdwk\") pod \"coredns-5dd5756b68-r4gbf\" (UID: \"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae\") " pod="kube-system/coredns-5dd5756b68-r4gbf" Jun 25 14:57:10.360129 kubelet[3039]: I0625 14:57:10.359980 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82c03bfc-edfe-48b6-9a13-261014350513-config-volume\") pod \"coredns-5dd5756b68-45cwv\" (UID: \"82c03bfc-edfe-48b6-9a13-261014350513\") " pod="kube-system/coredns-5dd5756b68-45cwv" Jun 25 14:57:10.360129 kubelet[3039]: I0625 14:57:10.360005 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae-config-volume\") pod \"coredns-5dd5756b68-r4gbf\" (UID: \"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae\") " pod="kube-system/coredns-5dd5756b68-r4gbf" Jun 25 14:57:10.360129 kubelet[3039]: I0625 14:57:10.360027 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5766523-eff6-42b0-8a56-15ca01f02ba3-tigera-ca-bundle\") pod \"calico-kube-controllers-795866d7b8-cmg8b\" (UID: \"a5766523-eff6-42b0-8a56-15ca01f02ba3\") " pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" Jun 25 14:57:10.360129 kubelet[3039]: I0625 14:57:10.360051 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmpv\" (UniqueName: \"kubernetes.io/projected/a5766523-eff6-42b0-8a56-15ca01f02ba3-kube-api-access-wmmpv\") pod \"calico-kube-controllers-795866d7b8-cmg8b\" (UID: \"a5766523-eff6-42b0-8a56-15ca01f02ba3\") " pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" Jun 25 14:57:10.610065 containerd[1604]: time="2024-06-25T14:57:10.609877443Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-r4gbf,Uid:2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae,Namespace:kube-system,Attempt:0,}" Jun 25 14:57:10.619538 containerd[1604]: time="2024-06-25T14:57:10.619493332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795866d7b8-cmg8b,Uid:a5766523-eff6-42b0-8a56-15ca01f02ba3,Namespace:calico-system,Attempt:0,}" Jun 25 14:57:10.624528 containerd[1604]: time="2024-06-25T14:57:10.624484839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45cwv,Uid:82c03bfc-edfe-48b6-9a13-261014350513,Namespace:kube-system,Attempt:0,}" Jun 25 14:57:11.508961 containerd[1604]: time="2024-06-25T14:57:11.508915801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rz48,Uid:7813833d-a993-419c-9c27-7d6c8ce9f5ba,Namespace:calico-system,Attempt:0,}" Jun 25 14:57:14.729198 containerd[1604]: time="2024-06-25T14:57:14.729055947Z" level=error msg="collecting metrics for 64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038" error="cgroups: cgroup deleted: unknown" Jun 25 14:57:15.843192 update_engine[1580]: I0625 14:57:15.059625 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:57:15.843192 update_engine[1580]: I0625 14:57:15.059847 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:57:15.843192 update_engine[1580]: I0625 14:57:15.060020 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:57:15.843192 update_engine[1580]: E0625 14:57:15.096578 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:57:15.843192 update_engine[1580]: I0625 14:57:15.096693 1580 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 25 14:57:15.857990 containerd[1604]: time="2024-06-25T14:57:15.857930751Z" level=info msg="shim disconnected" id=64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038 namespace=k8s.io Jun 25 14:57:15.858399 containerd[1604]: time="2024-06-25T14:57:15.858378797Z" level=warning msg="cleaning up after shim disconnected" id=64d0ac77c38ef6f433826397096b5789136d87fc09727f2a1eea68c45c187038 namespace=k8s.io Jun 25 14:57:15.858467 containerd[1604]: time="2024-06-25T14:57:15.858453718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:57:16.063282 containerd[1604]: time="2024-06-25T14:57:16.063227035Z" level=error msg="Failed to destroy network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.063812 containerd[1604]: time="2024-06-25T14:57:16.063776642Z" level=error msg="encountered an error cleaning up failed sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.064131 containerd[1604]: time="2024-06-25T14:57:16.064098367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rz48,Uid:7813833d-a993-419c-9c27-7d6c8ce9f5ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.064691 kubelet[3039]: E0625 14:57:16.064506 3039 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.064691 kubelet[3039]: E0625 14:57:16.064577 3039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rz48" Jun 25 14:57:16.064691 kubelet[3039]: E0625 14:57:16.064598 3039 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6rz48" Jun 25 14:57:16.065089 kubelet[3039]: E0625 14:57:16.064663 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6rz48_calico-system(7813833d-a993-419c-9c27-7d6c8ce9f5ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6rz48_calico-system(7813833d-a993-419c-9c27-7d6c8ce9f5ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:16.071655 containerd[1604]: time="2024-06-25T14:57:16.071597823Z" level=error msg="Failed to destroy network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.072018 containerd[1604]: time="2024-06-25T14:57:16.071968987Z" level=error msg="encountered an error cleaning up failed sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.072081 containerd[1604]: time="2024-06-25T14:57:16.072035348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795866d7b8-cmg8b,Uid:a5766523-eff6-42b0-8a56-15ca01f02ba3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.072303 kubelet[3039]: E0625 14:57:16.072259 3039 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.072379 kubelet[3039]: E0625 14:57:16.072344 3039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" Jun 25 14:57:16.072379 kubelet[3039]: E0625 14:57:16.072370 3039 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" Jun 25 14:57:16.072454 kubelet[3039]: E0625 14:57:16.072424 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-795866d7b8-cmg8b_calico-system(a5766523-eff6-42b0-8a56-15ca01f02ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-795866d7b8-cmg8b_calico-system(a5766523-eff6-42b0-8a56-15ca01f02ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" podUID="a5766523-eff6-42b0-8a56-15ca01f02ba3" Jun 25 14:57:16.091824 containerd[1604]: time="2024-06-25T14:57:16.091759201Z" level=error msg="Failed to destroy network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.092513 containerd[1604]: time="2024-06-25T14:57:16.092472770Z" level=error msg="encountered an error cleaning up failed sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.092679 containerd[1604]: time="2024-06-25T14:57:16.092649852Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-r4gbf,Uid:2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.093045 kubelet[3039]: E0625 14:57:16.093011 3039 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.093130 kubelet[3039]: E0625 14:57:16.093085 3039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-r4gbf" Jun 25 14:57:16.093130 kubelet[3039]: E0625 14:57:16.093111 3039 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-r4gbf" Jun 25 14:57:16.094091 kubelet[3039]: E0625 14:57:16.093169 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-r4gbf_kube-system(2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-r4gbf_kube-system(2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-r4gbf" podUID="2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae" Jun 25 14:57:16.103370 containerd[1604]: time="2024-06-25T14:57:16.103315669Z" level=error msg="Failed to destroy network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.103852 containerd[1604]: time="2024-06-25T14:57:16.103818076Z" level=error msg="encountered an error cleaning up failed sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.103982 containerd[1604]: 
time="2024-06-25T14:57:16.103956917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45cwv,Uid:82c03bfc-edfe-48b6-9a13-261014350513,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.104322 kubelet[3039]: E0625 14:57:16.104277 3039 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.104424 kubelet[3039]: E0625 14:57:16.104350 3039 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-45cwv" Jun 25 14:57:16.104424 kubelet[3039]: E0625 14:57:16.104374 3039 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-45cwv" Jun 25 14:57:16.104485 kubelet[3039]: E0625 14:57:16.104430 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-45cwv_kube-system(82c03bfc-edfe-48b6-9a13-261014350513)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-45cwv_kube-system(82c03bfc-edfe-48b6-9a13-261014350513)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-45cwv" podUID="82c03bfc-edfe-48b6-9a13-261014350513" Jun 25 14:57:16.653930 kubelet[3039]: I0625 14:57:16.653894 3039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:16.656052 containerd[1604]: time="2024-06-25T14:57:16.654783735Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:57:16.656052 containerd[1604]: time="2024-06-25T14:57:16.655026098Z" level=info msg="Ensure that sandbox 3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56 in task-service has been cleanup successfully" Jun 25 14:57:16.658546 kubelet[3039]: I0625 14:57:16.657847 3039 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:16.659321 containerd[1604]: time="2024-06-25T14:57:16.659237872Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:57:16.659620 containerd[1604]: time="2024-06-25T14:57:16.659575837Z" level=info msg="Ensure that sandbox 33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1 in task-service has been cleanup successfully" Jun 25 14:57:16.666963 containerd[1604]: time="2024-06-25T14:57:16.666906571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:57:16.670809 kubelet[3039]: I0625 14:57:16.670451 3039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:16.671078 containerd[1604]: time="2024-06-25T14:57:16.671052504Z" level=info msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" Jun 25 14:57:16.671417 containerd[1604]: time="2024-06-25T14:57:16.671388148Z" level=info msg="Ensure that sandbox 0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7 in task-service has been cleanup successfully" Jun 25 14:57:16.673324 kubelet[3039]: I0625 14:57:16.673065 3039 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:16.673668 containerd[1604]: time="2024-06-25T14:57:16.673646257Z" level=info msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" Jun 25 14:57:16.673951 containerd[1604]: time="2024-06-25T14:57:16.673929301Z" level=info msg="Ensure that sandbox 410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0 in task-service has been cleanup successfully" Jun 25 14:57:16.726171 containerd[1604]: time="2024-06-25T14:57:16.726117049Z" level=error msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" failed" error="failed to destroy network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.726955 kubelet[3039]: E0625 14:57:16.726743 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:16.726955 kubelet[3039]: E0625 14:57:16.726824 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56"} Jun 25 14:57:16.726955 kubelet[3039]: E0625 14:57:16.726875 3039 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:16.726955 kubelet[3039]: E0625 14:57:16.726906 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:16.799082 containerd[1604]: time="2024-06-25T14:57:16.799013063Z" level=error msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" failed" error="failed to destroy network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.799664 kubelet[3039]: E0625 14:57:16.799472 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:16.799664 kubelet[3039]: E0625 14:57:16.799552 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1"} Jun 25 14:57:16.799664 kubelet[3039]: E0625 14:57:16.799602 3039 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5766523-eff6-42b0-8a56-15ca01f02ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:16.799664 kubelet[3039]: E0625 14:57:16.799634 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5766523-eff6-42b0-8a56-15ca01f02ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" podUID="a5766523-eff6-42b0-8a56-15ca01f02ba3" Jun 25 14:57:16.809962 containerd[1604]: time="2024-06-25T14:57:16.809898043Z" level=error msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" failed" error="failed to destroy network for sandbox 
\"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.810487 kubelet[3039]: E0625 14:57:16.810320 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:16.810487 kubelet[3039]: E0625 14:57:16.810365 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7"} Jun 25 14:57:16.810487 kubelet[3039]: E0625 14:57:16.810409 3039 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82c03bfc-edfe-48b6-9a13-261014350513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:16.810487 kubelet[3039]: E0625 14:57:16.810442 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82c03bfc-edfe-48b6-9a13-261014350513\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-45cwv" podUID="82c03bfc-edfe-48b6-9a13-261014350513" Jun 25 14:57:16.811886 containerd[1604]: time="2024-06-25T14:57:16.811832548Z" level=error msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" failed" error="failed to destroy network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:16.812079 kubelet[3039]: E0625 14:57:16.812055 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:16.812143 kubelet[3039]: E0625 14:57:16.812092 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0"} Jun 25 14:57:16.812181 kubelet[3039]: E0625 14:57:16.812149 3039 
kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:16.812181 kubelet[3039]: E0625 14:57:16.812179 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-r4gbf" podUID="2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae" Jun 25 14:57:16.931738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1-shm.mount: Deactivated successfully. Jun 25 14:57:16.931865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0-shm.mount: Deactivated successfully. Jun 25 14:57:16.931957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56-shm.mount: Deactivated successfully. Jun 25 14:57:25.062776 update_engine[1580]: I0625 14:57:25.062693 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:57:25.063098 update_engine[1580]: I0625 14:57:25.063002 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:57:25.063211 update_engine[1580]: I0625 14:57:25.063185 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:57:25.085517 update_engine[1580]: E0625 14:57:25.085386 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:57:25.085709 update_engine[1580]: I0625 14:57:25.085549 1580 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 14:57:25.085709 update_engine[1580]: I0625 14:57:25.085572 1580 omaha_request_action.cc:617] Omaha request response: Jun 25 14:57:25.085709 update_engine[1580]: E0625 14:57:25.085621 1580 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 25 14:57:25.085709 update_engine[1580]: I0625 14:57:25.085638 1580 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085716 1580 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085722 1580 update_attempter.cc:306] Processing Done. Jun 25 14:57:25.085909 update_engine[1580]: E0625 14:57:25.085736 1580 update_attempter.cc:619] Update failed. 
Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085742 1580 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085744 1580 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085748 1580 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085808 1580 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085822 1580 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085825 1580 omaha_request_action.cc:272] Request: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: Jun 25 14:57:25.085909 update_engine[1580]: I0625 14:57:25.085829 1580 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 14:57:25.086643 update_engine[1580]: I0625 14:57:25.086624 1580 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 14:57:25.087084 update_engine[1580]: I0625 14:57:25.086787 1580 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 14:57:25.087484 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 25 14:57:25.192619 update_engine[1580]: E0625 14:57:25.192559 1580 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 14:57:25.192817 update_engine[1580]: I0625 14:57:25.192795 1580 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 14:57:25.192817 update_engine[1580]: I0625 14:57:25.192811 1580 omaha_request_action.cc:617] Omaha request response: Jun 25 14:57:25.192817 update_engine[1580]: I0625 14:57:25.192816 1580 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:57:25.192817 update_engine[1580]: I0625 14:57:25.192818 1580 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 14:57:25.192956 update_engine[1580]: I0625 14:57:25.192821 1580 update_attempter.cc:306] Processing Done. Jun 25 14:57:25.192956 update_engine[1580]: I0625 14:57:25.192827 1580 update_attempter.cc:310] Error event sent. Jun 25 14:57:25.192956 update_engine[1580]: I0625 14:57:25.192836 1580 update_check_scheduler.cc:74] Next update check in 45m15s Jun 25 14:57:25.195479 locksmithd[1645]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 25 14:57:25.236657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469877928.mount: Deactivated successfully. 
Jun 25 14:57:27.507270 containerd[1604]: time="2024-06-25T14:57:27.507203657Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:57:27.507664 containerd[1604]: time="2024-06-25T14:57:27.507236657Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:57:27.537524 containerd[1604]: time="2024-06-25T14:57:27.537466738Z" level=error msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" failed" error="failed to destroy network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:27.538018 kubelet[3039]: E0625 14:57:27.537877 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:27.538018 kubelet[3039]: E0625 14:57:27.537920 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1"} Jun 25 14:57:27.538018 kubelet[3039]: E0625 14:57:27.537957 3039 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5766523-eff6-42b0-8a56-15ca01f02ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:27.538018 kubelet[3039]: E0625 14:57:27.537986 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5766523-eff6-42b0-8a56-15ca01f02ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" podUID="a5766523-eff6-42b0-8a56-15ca01f02ba3" Jun 25 14:57:27.546688 containerd[1604]: time="2024-06-25T14:57:27.546622967Z" level=error msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" failed" error="failed to destroy network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:57:27.547079 kubelet[3039]: E0625 14:57:27.546939 3039 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:27.547079 kubelet[3039]: E0625 14:57:27.546986 3039 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56"} Jun 25 14:57:27.547079 kubelet[3039]: E0625 14:57:27.547028 3039 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:57:27.547079 kubelet[3039]: E0625 14:57:27.547058 3039 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7813833d-a993-419c-9c27-7d6c8ce9f5ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6rz48" podUID="7813833d-a993-419c-9c27-7d6c8ce9f5ba" Jun 25 14:57:29.195305 containerd[1604]: time="2024-06-25T14:57:29.195249430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:29.242693 containerd[1604]: time="2024-06-25T14:57:29.242640229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:57:29.290942 containerd[1604]: time="2024-06-25T14:57:29.290896357Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:29.336352 containerd[1604]: time="2024-06-25T14:57:29.336315732Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:29.399077 containerd[1604]: time="2024-06-25T14:57:29.399029351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:29.400004 containerd[1604]: time="2024-06-25T14:57:29.399957361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 12.732863708s" Jun 25 14:57:29.400115 containerd[1604]: time="2024-06-25T14:57:29.400095683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference 
\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:57:29.414088 containerd[1604]: time="2024-06-25T14:57:29.414051287Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:57:29.594559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240403996.mount: Deactivated successfully. Jun 25 14:57:29.701692 containerd[1604]: time="2024-06-25T14:57:29.701651595Z" level=info msg="CreateContainer within sandbox \"d2d3f9247802889e797ed4e9d7781f4d797bb2353c2e445a6dfeeea6776353d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203\"" Jun 25 14:57:29.703786 containerd[1604]: time="2024-06-25T14:57:29.703755340Z" level=info msg="StartContainer for \"d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203\"" Jun 25 14:57:29.754911 containerd[1604]: time="2024-06-25T14:57:29.754848901Z" level=info msg="StartContainer for \"d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203\" returns successfully" Jun 25 14:57:29.863863 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:57:29.864002 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 14:57:30.730784 systemd[1]: run-containerd-runc-k8s.io-d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203-runc.HycHFV.mount: Deactivated successfully. Jun 25 14:57:31.278000 audit[4159]: AVC avc: denied { write } for pid=4159 comm="tee" name="fd" dev="proc" ino=26323 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.283470 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:57:31.283574 kernel: audit: type=1400 audit(1719327451.278:283): avc: denied { write } for pid=4159 comm="tee" name="fd" dev="proc" ino=26323 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.278000 audit[4159]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff95969f8 a2=241 a3=1b6 items=1 ppid=4124 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.319000 audit[4175]: AVC avc: denied { write } for pid=4175 comm="tee" name="fd" dev="proc" ino=27095 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.364325 kernel: audit: type=1300 audit(1719327451.278:283): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff95969f8 a2=241 a3=1b6 items=1 ppid=4124 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.364459 kernel: audit: type=1400 audit(1719327451.319:284): avc: denied { write } for pid=4175 comm="tee" name="fd" dev="proc" ino=27095 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.319000 audit[4175]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff949aa08 a2=241 a3=1b6 items=1 ppid=4112 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.398940 kernel: audit: type=1300 audit(1719327451.319:284): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff949aa08 a2=241 a3=1b6 items=1 ppid=4112 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.319000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:57:31.319000 audit: PATH item=0 name="/dev/fd/63" inode=27092 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.429594 kernel: audit: type=1307 audit(1719327451.319:284): cwd="/etc/service/enabled/felix/log" Jun 25 14:57:31.429683 kernel: audit: type=1302 audit(1719327451.319:284): item=0 name="/dev/fd/63" inode=27092 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.278000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:57:31.447363 kernel: audit: type=1327 audit(1719327451.319:284): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.278000 audit: PATH item=0 name="/dev/fd/63" inode=26310 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.472542 kernel: audit: type=1307 audit(1719327451.278:283): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:57:31.472639 kernel: audit: type=1302 audit(1719327451.278:283): item=0 name="/dev/fd/63" inode=26310 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.489226 kernel: audit: type=1327 audit(1719327451.278:283): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.324000 audit[4173]: AVC avc: denied { write } for pid=4173 comm="tee" name="fd" dev="proc" ino=27099 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.324000 audit[4173]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffbffc9f9 a2=241 a3=1b6 items=1 ppid=4121 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.324000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:57:31.324000 audit: PATH item=0 name="/dev/fd/63" inode=26325 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jun 25 14:57:31.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.345000 audit[4171]: AVC avc: denied { write } for pid=4171 comm="tee" name="fd" dev="proc" ino=26337 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.345000 audit[4171]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffe95ea09 a2=241 a3=1b6 items=1 ppid=4126 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.345000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:57:31.345000 audit: PATH item=0 name="/dev/fd/63" inode=26324 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.346000 audit[4179]: AVC avc: denied { write } for pid=4179 comm="tee" name="fd" dev="proc" ino=26341 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.346000 audit[4179]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffb086a08 a2=241 a3=1b6 items=1 ppid=4120 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.346000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:57:31.346000 audit: PATH item=0 name="/dev/fd/63" inode=26333 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.380000 audit[4182]: AVC avc: denied { write } for pid=4182 comm="tee" name="fd" dev="proc" ino=27123 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:57:31.380000 audit[4182]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd377ba08 a2=241 a3=1b6 items=1 ppid=4114 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.380000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:57:31.380000 audit: PATH item=0 name="/dev/fd/63" inode=26334 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.398000 audit[4188]: AVC avc: denied { write } for pid=4188 comm="tee" name="fd" dev="proc" ino=27128 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=dir permissive=0 Jun 25 14:57:31.398000 audit[4188]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff4d8ea0a a2=241 a3=1b6 items=1 ppid=4116 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.398000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:57:31.398000 audit: PATH item=0 name="/dev/fd/63" inode=27125 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:57:31.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:57:31.507422 containerd[1604]: time="2024-06-25T14:57:31.507375177Z" level=info msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" Jun 25 14:57:31.507772 containerd[1604]: time="2024-06-25T14:57:31.507735501Z" level=info msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" Jun 25 14:57:31.610752 kubelet[3039]: I0625 14:57:31.610638 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qsl6f" podStartSLOduration=3.292247415 podCreationTimestamp="2024-06-25 14:56:51 +0000 UTC" firstStartedPulling="2024-06-25 14:56:52.082142723 +0000 UTC m=+27.693333560" lastFinishedPulling="2024-06-25 14:57:29.400492048 +0000 UTC m=+65.011682925" observedRunningTime="2024-06-25 14:57:30.720473109 +0000 UTC m=+66.331663946" watchObservedRunningTime="2024-06-25 14:57:31.61059678 +0000 UTC m=+67.221787657" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] k8s.go 608: Cleaning up netns ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" iface="eth0" netns="/var/run/netns/cni-d88b19ee-b297-b678-2e04-c8fa2729a529" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" iface="eth0" netns="/var/run/netns/cni-d88b19ee-b297-b678-2e04-c8fa2729a529" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" iface="eth0" netns="/var/run/netns/cni-d88b19ee-b297-b678-2e04-c8fa2729a529" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] k8s.go 615: Releasing IP address(es) ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.613 [INFO][4221] utils.go 188: Calico CNI releasing IP address ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.683 [INFO][4234] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.683 [INFO][4234] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.683 [INFO][4234] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.694 [WARNING][4234] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.694 [INFO][4234] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.695 [INFO][4234] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:57:31.704376 containerd[1604]: 2024-06-25 14:57:31.701 [INFO][4221] k8s.go 621: Teardown processing complete. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:57:31.709731 systemd[1]: run-netns-cni\x2dd88b19ee\x2db297\x2db678\x2d2e04\x2dc8fa2729a529.mount: Deactivated successfully. Jun 25 14:57:31.712138 containerd[1604]: time="2024-06-25T14:57:31.710981870Z" level=info msg="TearDown network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" successfully" Jun 25 14:57:31.712138 containerd[1604]: time="2024-06-25T14:57:31.711039070Z" level=info msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" returns successfully" Jun 25 14:57:31.712519 containerd[1604]: time="2024-06-25T14:57:31.712487047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45cwv,Uid:82c03bfc-edfe-48b6-9a13-261014350513,Namespace:kube-system,Attempt:1,}" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] k8s.go 608: Cleaning up netns ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" iface="eth0" netns="/var/run/netns/cni-ab949231-c868-b910-7738-450b399df637" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" iface="eth0" netns="/var/run/netns/cni-ab949231-c868-b910-7738-450b399df637" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" iface="eth0" netns="/var/run/netns/cni-ab949231-c868-b910-7738-450b399df637" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] k8s.go 615: Releasing IP address(es) ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.616 [INFO][4226] utils.go 188: Calico CNI releasing IP address ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.683 [INFO][4238] ipam_plugin.go 411: Releasing address using handleID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.684 [INFO][4238] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.695 [INFO][4238] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.715 [WARNING][4238] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.715 [INFO][4238] ipam_plugin.go 439: Releasing address using workloadID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.717 [INFO][4238] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:57:31.722206 containerd[1604]: 2024-06-25 14:57:31.720 [INFO][4226] k8s.go 621: Teardown processing complete. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:57:31.725089 systemd[1]: run-netns-cni\x2dab949231\x2dc868\x2db910\x2d7738\x2d450b399df637.mount: Deactivated successfully. 
Jun 25 14:57:31.725312 containerd[1604]: time="2024-06-25T14:57:31.725263596Z" level=info msg="TearDown network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" successfully" Jun 25 14:57:31.725392 containerd[1604]: time="2024-06-25T14:57:31.725376797Z" level=info msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" returns successfully" Jun 25 14:57:31.726815 containerd[1604]: time="2024-06-25T14:57:31.726780454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r4gbf,Uid:2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae,Namespace:kube-system,Attempt:1,}" Jun 25 14:57:31.772858 systemd[1]: run-containerd-runc-k8s.io-d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203-runc.1QUCFa.mount: Deactivated successfully. Jun 25 14:57:31.875000 audit: BPF prog-id=10 op=LOAD Jun 25 14:57:31.875000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdba22d58 a2=70 a3=ffffdba22dc8 items=0 ppid=4113 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.875000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:57:31.875000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:57:31.875000 audit: BPF prog-id=11 op=LOAD Jun 25 14:57:31.875000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffdba22d58 a2=70 a3=4b243c items=0 ppid=4113 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.875000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:57:31.875000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:57:31.875000 audit: BPF prog-id=12 op=LOAD Jun 25 14:57:31.875000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffdba22cf8 a2=70 a3=ffffdba22d68 items=0 ppid=4113 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.875000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:57:31.875000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:57:31.876000 audit: BPF prog-id=13 op=LOAD Jun 25 14:57:31.876000 audit[4327]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffdba22d28 a2=70 a3=1b94e4a9 items=0 ppid=4113 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:31.876000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:57:31.892000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:57:34.108927 systemd-networkd[1299]: vxlan.calico: Link UP Jun 25 14:57:34.108937 systemd-networkd[1299]: vxlan.calico: Gained carrier Jun 25 14:57:34.282000 audit[4359]: NETFILTER_CFG table=mangle:100 family=2 entries=16 op=nft_register_chain pid=4359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:34.282000 audit[4359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcfafee90 a2=0 a3=ffffb0e80fa8 items=0 ppid=4113 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:34.282000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:34.291000 audit[4358]: NETFILTER_CFG table=nat:101 family=2 entries=15 op=nft_register_chain pid=4358 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:34.291000 audit[4358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc47258e0 a2=0 a3=ffff81c84fa8 items=0 ppid=4113 pid=4358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:34.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:34.293000 audit[4360]: NETFILTER_CFG table=filter:102 family=2 entries=39 op=nft_register_chain pid=4360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:34.293000 audit[4360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffc83da9b0 a2=0 a3=ffff8ce2afa8 items=0 ppid=4113 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:34.293000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:34.293000 audit[4357]: NETFILTER_CFG table=raw:103 family=2 entries=19 op=nft_register_chain pid=4357 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:34.293000 audit[4357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=fffff9e52630 a2=0 a3=ffffb361efa8 items=0 ppid=4113 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:34.293000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:35.219446 systemd-networkd[1299]: vxlan.calico: Gained IPv6LL Jun 25 14:57:36.611476 systemd-networkd[1299]: calid7b03d1bb80: Link UP Jun 25 14:57:36.622716 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 
eth0: link becomes ready Jun 25 14:57:36.622840 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid7b03d1bb80: link becomes ready Jun 25 14:57:36.625644 systemd-networkd[1299]: calid7b03d1bb80: Gained carrier Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.468 [INFO][4367] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0 coredns-5dd5756b68- kube-system 82c03bfc-edfe-48b6-9a13-261014350513 746 0 2024-06-25 14:56:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb coredns-5dd5756b68-45cwv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7b03d1bb80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.468 [INFO][4367] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.556 [INFO][4392] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" HandleID="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.578 [INFO][4392] ipam_plugin.go 264: Auto assigning IP ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" HandleID="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000301d40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"coredns-5dd5756b68-45cwv", "timestamp":"2024-06-25 14:57:36.544973036 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.578 [INFO][4392] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.578 [INFO][4392] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.578 [INFO][4392] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.580 [INFO][4392] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.583 [INFO][4392] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.588 [INFO][4392] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.592 [INFO][4392] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.593 [INFO][4392] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.593 [INFO][4392] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.595 [INFO][4392] ipam.go 1685: Creating new handle: k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736 Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.598 [INFO][4392] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.605 [INFO][4392] ipam.go 1216: Successfully claimed IPs: [192.168.97.193/26] block=192.168.97.192/26 handle="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.605 [INFO][4392] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.193/26] handle="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.605 [INFO][4392] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:57:36.645241 containerd[1604]: 2024-06-25 14:57:36.605 [INFO][4392] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.193/26] IPv6=[] ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" HandleID="k8s-pod-network.c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.606 [INFO][4367] k8s.go 386: Populated endpoint ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"82c03bfc-edfe-48b6-9a13-261014350513", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"coredns-5dd5756b68-45cwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b03d1bb80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.606 [INFO][4367] k8s.go 387: Calico CNI using IPs: [192.168.97.193/32] ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.606 [INFO][4367] dataplane_linux.go 68: Setting the host side veth name to calid7b03d1bb80 ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.624 [INFO][4367] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" 
WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.627 [INFO][4367] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"82c03bfc-edfe-48b6-9a13-261014350513", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736", Pod:"coredns-5dd5756b68-45cwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b03d1bb80", MAC:"2e:ad:1f:d7:b1:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:36.646160 containerd[1604]: 2024-06-25 14:57:36.640 [INFO][4367] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736" Namespace="kube-system" Pod="coredns-5dd5756b68-45cwv" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:57:36.658000 audit[4416]: NETFILTER_CFG table=filter:104 family=2 entries=34 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:36.663632 kernel: kauditd_printk_skb: 53 callbacks suppressed Jun 25 14:57:36.663737 kernel: audit: type=1325 audit(1719327456.658:302): table=filter:104 family=2 entries=34 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:36.658000 audit[4416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffcb3287a0 a2=0 a3=ffff82c38fa8 items=0 ppid=4113 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:36.703061 kernel: audit: type=1300 audit(1719327456.658:302): arch=c00000b7 syscall=211 
success=yes exit=19148 a0=3 a1=ffffcb3287a0 a2=0 a3=ffff82c38fa8 items=0 ppid=4113 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:36.709807 systemd-networkd[1299]: calie0ad7032e40: Link UP Jun 25 14:57:36.658000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:36.716301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie0ad7032e40: link becomes ready Jun 25 14:57:36.732628 kernel: audit: type=1327 audit(1719327456.658:302): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:36.718865 systemd-networkd[1299]: calie0ad7032e40: Gained carrier Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.483 [INFO][4379] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0 coredns-5dd5756b68- kube-system 2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae 745 0 2024-06-25 14:56:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb coredns-5dd5756b68-r4gbf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0ad7032e40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.483 [INFO][4379] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.585 [INFO][4397] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" HandleID="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.603 [INFO][4397] ipam_plugin.go 264: Auto assigning IP ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" HandleID="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035c510), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"coredns-5dd5756b68-r4gbf", "timestamp":"2024-06-25 14:57:36.585913062 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.603 
[INFO][4397] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.608 [INFO][4397] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.615 [INFO][4397] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.628 [INFO][4397] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.642 [INFO][4397] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.652 [INFO][4397] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.653 [INFO][4397] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.656 [INFO][4397] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.656 [INFO][4397] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.657 [INFO][4397] ipam.go 1685: Creating new handle: k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.677 [INFO][4397] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.703 [INFO][4397] ipam.go 1216: Successfully claimed IPs: [192.168.97.194/26] block=192.168.97.192/26 handle="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.704 [INFO][4397] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.194/26] handle="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.704 [INFO][4397] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:57:36.736668 containerd[1604]: 2024-06-25 14:57:36.704 [INFO][4397] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.194/26] IPv6=[] ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" HandleID="k8s-pod-network.a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.706 [INFO][4379] k8s.go 386: Populated endpoint ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"coredns-5dd5756b68-r4gbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0ad7032e40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.706 [INFO][4379] k8s.go 387: Calico CNI using IPs: [192.168.97.194/32] ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.706 [INFO][4379] dataplane_linux.go 68: Setting the host side veth name to calie0ad7032e40 ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.720 [INFO][4379] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" 
WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.720 [INFO][4379] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a", Pod:"coredns-5dd5756b68-r4gbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0ad7032e40", MAC:"1a:e1:89:02:d9:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:36.737242 containerd[1604]: 2024-06-25 14:57:36.733 [INFO][4379] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a" Namespace="kube-system" Pod="coredns-5dd5756b68-r4gbf" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:57:36.747702 containerd[1604]: time="2024-06-25T14:57:36.746963494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:36.747702 containerd[1604]: time="2024-06-25T14:57:36.747015734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:36.747702 containerd[1604]: time="2024-06-25T14:57:36.747036454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:36.747702 containerd[1604]: time="2024-06-25T14:57:36.747053015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:36.761000 audit[4447]: NETFILTER_CFG table=filter:105 family=2 entries=30 op=nft_register_chain pid=4447 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:36.761000 audit[4447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17032 a0=3 a1=fffffb7512e0 a2=0 a3=ffff8e3b8fa8 items=0 ppid=4113 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:36.800585 kernel: audit: type=1325 audit(1719327456.761:303): table=filter:105 family=2 entries=30 op=nft_register_chain pid=4447 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:36.800703 kernel: audit: type=1300 audit(1719327456.761:303): arch=c00000b7 syscall=211 success=yes exit=17032 a0=3 a1=fffffb7512e0 a2=0 a3=ffff8e3b8fa8 items=0 ppid=4113 pid=4447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:36.761000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:36.815463 kernel: audit: type=1327 audit(1719327456.761:303): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:36.853255 containerd[1604]: time="2024-06-25T14:57:36.853205982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-45cwv,Uid:82c03bfc-edfe-48b6-9a13-261014350513,Namespace:kube-system,Attempt:1,} returns sandbox id \"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736\"" Jun 25 14:57:36.857278 containerd[1604]: time="2024-06-25T14:57:36.855612209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:36.857278 containerd[1604]: time="2024-06-25T14:57:36.855724571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:36.857278 containerd[1604]: time="2024-06-25T14:57:36.855753731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:36.857278 containerd[1604]: time="2024-06-25T14:57:36.855794812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:36.858129 containerd[1604]: time="2024-06-25T14:57:36.857610712Z" level=info msg="CreateContainer within sandbox \"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:57:36.895623 containerd[1604]: time="2024-06-25T14:57:36.895567824Z" level=info msg="CreateContainer within sandbox \"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fc098bd6255beb1087a08b0c5317c3efd0689a8e59d7aac83ea71dda89d08e5\"" Jun 25 14:57:36.896571 containerd[1604]: time="2024-06-25T14:57:36.896509915Z" level=info msg="StartContainer for \"9fc098bd6255beb1087a08b0c5317c3efd0689a8e59d7aac83ea71dda89d08e5\"" Jun 25 14:57:36.904617 containerd[1604]: time="2024-06-25T14:57:36.904521726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r4gbf,Uid:2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae,Namespace:kube-system,Attempt:1,} returns sandbox id \"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a\"" Jun 25 14:57:36.907954 containerd[1604]: time="2024-06-25T14:57:36.907921684Z" level=info msg="CreateContainer within sandbox \"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:57:36.944209 containerd[1604]: time="2024-06-25T14:57:36.944167537Z" level=info msg="CreateContainer within sandbox \"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cc0063ee6c5f1cfbbb4c4b9b9ec4f3db9fe1bc3ab2fd28dec3351d49de45a5a\"" Jun 25 14:57:36.946728 containerd[1604]: time="2024-06-25T14:57:36.946698246Z" level=info msg="StartContainer for \"2cc0063ee6c5f1cfbbb4c4b9b9ec4f3db9fe1bc3ab2fd28dec3351d49de45a5a\"" Jun 25 14:57:36.966554 containerd[1604]: time="2024-06-25T14:57:36.966514751Z" level=info msg="StartContainer for \"9fc098bd6255beb1087a08b0c5317c3efd0689a8e59d7aac83ea71dda89d08e5\" returns successfully" Jun 25 14:57:37.010433 containerd[1604]: time="2024-06-25T14:57:37.010386689Z" level=info msg="StartContainer for \"2cc0063ee6c5f1cfbbb4c4b9b9ec4f3db9fe1bc3ab2fd28dec3351d49de45a5a\" returns successfully" Jun 25 14:57:37.762623 kubelet[3039]: I0625 14:57:37.762581 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-r4gbf" podStartSLOduration=60.762541807 podCreationTimestamp="2024-06-25 14:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:57:37.750396749 +0000 UTC m=+73.361587586" watchObservedRunningTime="2024-06-25 14:57:37.762541807 +0000 UTC m=+73.373732684" Jun 25 14:57:37.767000 audit[4594]: NETFILTER_CFG table=filter:106 family=2 entries=14 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.767000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff6a2a220 a2=0 a3=1 items=0 ppid=3220 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:37.807239 kernel: audit: type=1325 audit(1719327457.767:304): table=filter:106 family=2 entries=14 op=nft_register_rule pid=4594 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.807420 kernel: audit: type=1300 audit(1719327457.767:304): arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff6a2a220 a2=0 a3=1 items=0 ppid=3220 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:37.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:37.820446 kernel: audit: type=1327 audit(1719327457.767:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:37.767000 audit[4594]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.833509 kernel: audit: type=1325 audit(1719327457.767:305): table=nat:107 family=2 entries=14 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.767000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff6a2a220 a2=0 a3=1 items=0 ppid=3220 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:37.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:37.843497 systemd-networkd[1299]: calid7b03d1bb80: Gained IPv6LL Jun 25 14:57:37.843738 systemd-networkd[1299]: calie0ad7032e40: Gained IPv6LL Jun 25 14:57:37.868000 audit[4596]: NETFILTER_CFG table=filter:108 family=2 entries=11 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.868000 audit[4596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffdc1d4f30 a2=0 a3=1 items=0 ppid=3220 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:37.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:37.870000 audit[4596]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:37.870000 audit[4596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdc1d4f30 a2=0 a3=1 items=0 ppid=3220 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:37.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:38.885000 audit[4599]: NETFILTER_CFG table=filter:110 family=2 entries=8 op=nft_register_rule pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:38.885000 audit[4599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=fffffb8831a0 a2=0 a3=1 items=0 ppid=3220 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:38.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:38.889000 audit[4599]: NETFILTER_CFG table=nat:111 family=2 entries=56 op=nft_register_chain pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:38.889000 audit[4599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffffb8831a0 a2=0 a3=1 items=0 ppid=3220 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:38.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:40.509330 containerd[1604]: time="2024-06-25T14:57:40.507602954Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:57:40.551704 kubelet[3039]: I0625 14:57:40.551440 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-45cwv" podStartSLOduration=63.551398164 podCreationTimestamp="2024-06-25 14:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:57:37.874224312 +0000 UTC m=+73.485415149" watchObservedRunningTime="2024-06-25 14:57:40.551398164 +0000 UTC m=+76.162589041" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.552 [INFO][4622] k8s.go 608: Cleaning up netns ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.552 [INFO][4622] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" iface="eth0" netns="/var/run/netns/cni-e61c4ba4-145d-85af-8a88-9f7aec5263ad" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.552 [INFO][4622] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" iface="eth0" netns="/var/run/netns/cni-e61c4ba4-145d-85af-8a88-9f7aec5263ad" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.553 [INFO][4622] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" iface="eth0" netns="/var/run/netns/cni-e61c4ba4-145d-85af-8a88-9f7aec5263ad" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.553 [INFO][4622] k8s.go 615: Releasing IP address(es) ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.553 [INFO][4622] utils.go 188: Calico CNI releasing IP address ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.574 [INFO][4629] ipam_plugin.go 411: Releasing address using handleID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.574 [INFO][4629] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.574 [INFO][4629] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.582 [WARNING][4629] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.582 [INFO][4629] ipam_plugin.go 439: Releasing address using workloadID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.583 [INFO][4629] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:57:40.586121 containerd[1604]: 2024-06-25 14:57:40.584 [INFO][4622] k8s.go 621: Teardown processing complete. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:57:40.586811 containerd[1604]: time="2024-06-25T14:57:40.586769199Z" level=info msg="TearDown network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" successfully" Jun 25 14:57:40.586895 containerd[1604]: time="2024-06-25T14:57:40.586878880Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" returns successfully" Jun 25 14:57:40.589155 systemd[1]: run-netns-cni\x2de61c4ba4\x2d145d\x2d85af\x2d8a88\x2d9f7aec5263ad.mount: Deactivated successfully. 
Jun 25 14:57:40.590638 containerd[1604]: time="2024-06-25T14:57:40.589648711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795866d7b8-cmg8b,Uid:a5766523-eff6-42b0-8a56-15ca01f02ba3,Namespace:calico-system,Attempt:1,}" Jun 25 14:57:41.033277 systemd-networkd[1299]: calie69df58ca20: Link UP Jun 25 14:57:41.044346 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:57:41.044461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie69df58ca20: link becomes ready Jun 25 14:57:41.046058 systemd-networkd[1299]: calie69df58ca20: Gained carrier Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:40.973 [INFO][4639] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0 calico-kube-controllers-795866d7b8- calico-system a5766523-eff6-42b0-8a56-15ca01f02ba3 789 0 2024-06-25 14:56:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:795866d7b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb calico-kube-controllers-795866d7b8-cmg8b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie69df58ca20 [] []}} ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:40.973 [INFO][4639] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:40.996 [INFO][4649] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" HandleID="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.006 [INFO][4649] ipam_plugin.go 264: Auto assigning IP ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" HandleID="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030e040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"calico-kube-controllers-795866d7b8-cmg8b", "timestamp":"2024-06-25 14:57:40.996689902 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.006 [INFO][4649] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.006 [INFO][4649] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.007 [INFO][4649] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.008 [INFO][4649] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.011 [INFO][4649] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.015 [INFO][4649] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.017 [INFO][4649] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.019 [INFO][4649] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.019 [INFO][4649] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.020 [INFO][4649] ipam.go 1685: Creating new handle: k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98 Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.023 [INFO][4649] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.028 [INFO][4649] ipam.go 1216: Successfully claimed IPs: [192.168.97.195/26] block=192.168.97.192/26 handle="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.029 [INFO][4649] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.195/26] handle="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.029 [INFO][4649] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:57:41.057021 containerd[1604]: 2024-06-25 14:57:41.029 [INFO][4649] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.195/26] IPv6=[] ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" HandleID="k8s-pod-network.00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.031 [INFO][4639] k8s.go 386: Populated endpoint ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0", GenerateName:"calico-kube-controllers-795866d7b8-", Namespace:"calico-system", SelfLink:"", UID:"a5766523-eff6-42b0-8a56-15ca01f02ba3", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795866d7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"calico-kube-controllers-795866d7b8-cmg8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie69df58ca20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.031 [INFO][4639] k8s.go 387: Calico CNI using IPs: [192.168.97.195/32] ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.031 [INFO][4639] dataplane_linux.go 68: Setting the host side veth name to calie69df58ca20 ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.046 [INFO][4639] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.047 [INFO][4639] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0", GenerateName:"calico-kube-controllers-795866d7b8-", Namespace:"calico-system", SelfLink:"", UID:"a5766523-eff6-42b0-8a56-15ca01f02ba3", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795866d7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98", Pod:"calico-kube-controllers-795866d7b8-cmg8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie69df58ca20", MAC:"ce:7d:8a:07:ea:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:41.057659 containerd[1604]: 2024-06-25 14:57:41.054 [INFO][4639] k8s.go 500: Wrote updated endpoint to datastore ContainerID="00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98" Namespace="calico-system" Pod="calico-kube-controllers-795866d7b8-cmg8b" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:57:41.068000 audit[4667]: NETFILTER_CFG table=filter:112 family=2 entries=48 op=nft_register_chain pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:41.068000 audit[4667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24376 a0=3 a1=ffffc641f2a0 a2=0 a3=ffff9551efa8 items=0 ppid=4113 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:41.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:41.105571 containerd[1604]: time="2024-06-25T14:57:41.105496434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:41.105702 containerd[1604]: time="2024-06-25T14:57:41.105543315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:41.105702 containerd[1604]: time="2024-06-25T14:57:41.105587835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:41.105702 containerd[1604]: time="2024-06-25T14:57:41.105598836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:41.150159 containerd[1604]: time="2024-06-25T14:57:41.150120371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-795866d7b8-cmg8b,Uid:a5766523-eff6-42b0-8a56-15ca01f02ba3,Namespace:calico-system,Attempt:1,} returns sandbox id \"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98\"" Jun 25 14:57:41.151831 containerd[1604]: time="2024-06-25T14:57:41.151805110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:57:41.507399 containerd[1604]: time="2024-06-25T14:57:41.507359870Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:57:41.589264 systemd[1]: run-containerd-runc-k8s.io-00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98-runc.ur20Ow.mount: Deactivated successfully. Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] k8s.go 608: Cleaning up netns ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" iface="eth0" netns="/var/run/netns/cni-dacdadc3-eaf8-a501-48d1-fd5876e1ef84" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" iface="eth0" netns="/var/run/netns/cni-dacdadc3-eaf8-a501-48d1-fd5876e1ef84" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" iface="eth0" netns="/var/run/netns/cni-dacdadc3-eaf8-a501-48d1-fd5876e1ef84" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] k8s.go 615: Releasing IP address(es) ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.550 [INFO][4726] utils.go 188: Calico CNI releasing IP address ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.575 [INFO][4732] ipam_plugin.go 411: Releasing address using handleID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.576 [INFO][4732] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.576 [INFO][4732] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.584 [WARNING][4732] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.584 [INFO][4732] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.585 [INFO][4732] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:57:41.594060 containerd[1604]: 2024-06-25 14:57:41.589 [INFO][4726] k8s.go 621: Teardown processing complete. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:57:41.595592 containerd[1604]: time="2024-06-25T14:57:41.594894125Z" level=info msg="TearDown network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" successfully" Jun 25 14:57:41.595592 containerd[1604]: time="2024-06-25T14:57:41.594933445Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" returns successfully" Jun 25 14:57:41.594073 systemd[1]: run-netns-cni\x2ddacdadc3\x2deaf8\x2da501\x2d48d1\x2dfd5876e1ef84.mount: Deactivated successfully. Jun 25 14:57:41.596092 containerd[1604]: time="2024-06-25T14:57:41.596066778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rz48,Uid:7813833d-a993-419c-9c27-7d6c8ce9f5ba,Namespace:calico-system,Attempt:1,}" Jun 25 14:57:42.237124 systemd-networkd[1299]: calib54124dd017: Link UP Jun 25 14:57:42.250128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:57:42.250231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib54124dd017: link becomes ready Jun 25 14:57:42.251575 systemd-networkd[1299]: calib54124dd017: Gained carrier Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.170 [INFO][4740] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0 csi-node-driver- calico-system 7813833d-a993-419c-9c27-7d6c8ce9f5ba 798 0 2024-06-25 14:56:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb csi-node-driver-6rz48 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib54124dd017 [] []}} ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.171 [INFO][4740] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 
14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.195 [INFO][4751] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" HandleID="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.206 [INFO][4751] ipam_plugin.go 264: Auto assigning IP ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" HandleID="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"csi-node-driver-6rz48", "timestamp":"2024-06-25 14:57:42.19570096 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.206 [INFO][4751] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.206 [INFO][4751] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.206 [INFO][4751] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.207 [INFO][4751] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.212 [INFO][4751] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.216 [INFO][4751] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.218 [INFO][4751] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.220 [INFO][4751] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.220 [INFO][4751] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.222 [INFO][4751] ipam.go 1685: Creating new handle: k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.226 [INFO][4751] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.231 [INFO][4751] ipam.go 1216: Successfully claimed IPs: [192.168.97.196/26] block=192.168.97.192/26 
handle="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.231 [INFO][4751] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.196/26] handle="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.231 [INFO][4751] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:57:42.263270 containerd[1604]: 2024-06-25 14:57:42.231 [INFO][4751] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.196/26] IPv6=[] ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" HandleID="k8s-pod-network.6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.233 [INFO][4740] k8s.go 386: Populated endpoint ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7813833d-a993-419c-9c27-7d6c8ce9f5ba", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"csi-node-driver-6rz48", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib54124dd017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.233 [INFO][4740] k8s.go 387: Calico CNI using IPs: [192.168.97.196/32] ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.233 [INFO][4740] dataplane_linux.go 68: Setting the host side veth name to calib54124dd017 ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.251 [INFO][4740] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.251 [INFO][4740] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7813833d-a993-419c-9c27-7d6c8ce9f5ba", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a", Pod:"csi-node-driver-6rz48", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib54124dd017", MAC:"f6:7c:e1:07:e8:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:42.263844 containerd[1604]: 2024-06-25 14:57:42.261 [INFO][4740] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a" Namespace="calico-system" Pod="csi-node-driver-6rz48" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:57:42.277000 audit[4771]: NETFILTER_CFG table=filter:113 family=2 entries=38 op=nft_register_chain pid=4771 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:42.284096 kernel: kauditd_printk_skb: 17 callbacks suppressed Jun 25 14:57:42.284184 kernel: audit: type=1325 audit(1719327462.277:311): table=filter:113 family=2 entries=38 op=nft_register_chain pid=4771 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:42.277000 audit[4771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19812 a0=3 a1=ffffc53f9d50 a2=0 a3=ffff98b7cfa8 items=0 ppid=4113 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:42.323211 kernel: audit: type=1300 audit(1719327462.277:311): arch=c00000b7 syscall=211 success=yes exit=19812 a0=3 a1=ffffc53f9d50 a2=0 a3=ffff98b7cfa8 items=0 ppid=4113 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:42.277000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:42.337709 kernel: audit: type=1327 audit(1719327462.277:311): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:42.461271 containerd[1604]: time="2024-06-25T14:57:42.461016509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:42.461271 containerd[1604]: time="2024-06-25T14:57:42.461073468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:42.461271 containerd[1604]: time="2024-06-25T14:57:42.461091908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:42.461271 containerd[1604]: time="2024-06-25T14:57:42.461104628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:42.529794 containerd[1604]: time="2024-06-25T14:57:42.529748380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6rz48,Uid:7813833d-a993-419c-9c27-7d6c8ce9f5ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a\"" Jun 25 14:57:42.615986 systemd[1]: run-containerd-runc-k8s.io-d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203-runc.IlngmZ.mount: Deactivated successfully. 
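Both pod IPs above (192.168.97.195 for calico-kube-controllers-795866d7b8-cmg8b and 192.168.97.196 for csi-node-driver-6rz48) come out of the same host-affine block, 192.168.97.192/26, handed out one address at a time under the host-wide IPAM lock the plugin logs around each assignment. The following is only a simplified sketch of that block-allocation idea (a free-ordinal scan over the /26), not Calico's actual ipam.go:

```go
package main

import (
	"fmt"
	"net/netip"
)

// blockAllocator hands out addresses from a small IPv4 CIDR block, the way a
// host-affine IPAM block is consumed one ordinal at a time.
type blockAllocator struct {
	base netip.Addr // first address of the block, e.g. 192.168.97.192
	size int        // number of addresses in the block (64 for a /26)
	used []bool     // ordinal -> already assigned
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	size := 1 << (32 - p.Bits())
	return &blockAllocator{base: p.Addr(), size: size, used: make([]bool, size)}, nil
}

// assign returns the next free address in the block, scanning ordinals in order.
func (b *blockAllocator) assign() (netip.Addr, bool) {
	addr := b.base
	for i := 0; i < b.size; i++ {
		if !b.used[i] {
			b.used[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	alloc, err := newBlockAllocator("192.168.97.192/26")
	if err != nil {
		panic(err)
	}
	// Pretend .192-.194 were already handed out to earlier workloads on this node.
	for i := 0; i < 3; i++ {
		alloc.assign()
	}
	ip, _ := alloc.assign()
	fmt.Println(ip) // 192.168.97.195, matching the calico-kube-controllers pod above
}
```

In the real plugin the block and its allocation state live in the datastore, which is why the log shows the block being loaded ("Attempting to load block") and then written back ("Writing block in order to claim IPs") for each assignment.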
Jun 25 14:57:43.027466 systemd-networkd[1299]: calie69df58ca20: Gained IPv6LL Jun 25 14:57:43.987468 systemd-networkd[1299]: calib54124dd017: Gained IPv6LL Jun 25 14:57:50.193926 containerd[1604]: time="2024-06-25T14:57:50.193858100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:50.194344 containerd[1604]: time="2024-06-25T14:57:50.189163689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:57:50.250953 containerd[1604]: time="2024-06-25T14:57:50.250900369Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:50.297072 containerd[1604]: time="2024-06-25T14:57:50.297017384Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:50.343391 containerd[1604]: time="2024-06-25T14:57:50.343343251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:50.344266 containerd[1604]: time="2024-06-25T14:57:50.344231451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 9.192268741s" Jun 25 14:57:50.344364 containerd[1604]: time="2024-06-25T14:57:50.344267567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:57:50.345168 containerd[1604]: time="2024-06-25T14:57:50.345138090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:57:50.365539 containerd[1604]: time="2024-06-25T14:57:50.365487201Z" level=info msg="CreateContainer within sandbox \"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:57:50.599905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872191803.mount: Deactivated successfully. 
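The "in 9.192268741s" figure above is just the elapsed wall-clock time of the image pull; the kubelet record that follows carries roughly the same window as firstStartedPulling/lastFinishedPulling in Go's default time format. A standalone sketch of that arithmetic (not kubelet or containerd code; the journal timestamps only approximate containerd's internally measured duration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// firstStartedPulling / lastFinishedPulling for the kube-controllers image,
	// copied from the kubelet pod_startup_latency_tracker record below.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's time.Time.String() format
	started, err := time.Parse(layout, "2024-06-25 14:57:41.151331105 +0000 UTC")
	if err != nil {
		panic(err)
	}
	finished, err := time.Parse(layout, "2024-06-25 14:57:50.344600602 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(finished.Sub(started)) // ~9.193s, in line with containerd's reported 9.192268741s
}
```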
Jun 25 14:57:50.849861 containerd[1604]: time="2024-06-25T14:57:50.849817721Z" level=info msg="CreateContainer within sandbox \"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c\"" Jun 25 14:57:50.852410 containerd[1604]: time="2024-06-25T14:57:50.852317186Z" level=info msg="StartContainer for \"71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c\"" Jun 25 14:57:50.919565 containerd[1604]: time="2024-06-25T14:57:50.919525612Z" level=info msg="StartContainer for \"71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c\" returns successfully" Jun 25 14:57:51.786962 kubelet[3039]: I0625 14:57:51.786349 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-795866d7b8-cmg8b" podStartSLOduration=51.593007134 podCreationTimestamp="2024-06-25 14:56:51 +0000 UTC" firstStartedPulling="2024-06-25 14:57:41.151331105 +0000 UTC m=+76.762521982" lastFinishedPulling="2024-06-25 14:57:50.344600602 +0000 UTC m=+85.955791559" observedRunningTime="2024-06-25 14:57:51.786109493 +0000 UTC m=+87.397300370" watchObservedRunningTime="2024-06-25 14:57:51.786276711 +0000 UTC m=+87.397467588" Jun 25 14:57:51.799984 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.nG8b8F.mount: Deactivated successfully. Jun 25 14:57:52.740971 containerd[1604]: time="2024-06-25T14:57:52.740931711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:52.788461 containerd[1604]: time="2024-06-25T14:57:52.788413968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:57:52.792003 containerd[1604]: time="2024-06-25T14:57:52.791974705Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:52.838653 containerd[1604]: time="2024-06-25T14:57:52.838597154Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:52.898096 containerd[1604]: time="2024-06-25T14:57:52.898042334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:52.899124 containerd[1604]: time="2024-06-25T14:57:52.899083878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.553790569s" Jun 25 14:57:52.899202 containerd[1604]: time="2024-06-25T14:57:52.899130592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:57:52.901246 containerd[1604]: time="2024-06-25T14:57:52.901216001Z" level=info msg="CreateContainer within sandbox \"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:57:53.283630 containerd[1604]: time="2024-06-25T14:57:53.283569277Z" level=info msg="CreateContainer within sandbox \"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6eb104e1e0885fdbc4f6fe84b1349505799ce459e4a6a4e04efddd3a3433708\"" Jun 25 14:57:53.284687 containerd[1604]: time="2024-06-25T14:57:53.284647379Z" level=info msg="StartContainer for \"b6eb104e1e0885fdbc4f6fe84b1349505799ce459e4a6a4e04efddd3a3433708\"" Jun 25 14:57:53.316728 systemd[1]: run-containerd-runc-k8s.io-b6eb104e1e0885fdbc4f6fe84b1349505799ce459e4a6a4e04efddd3a3433708-runc.Z5yHZi.mount: Deactivated successfully. Jun 25 14:57:53.356464 containerd[1604]: time="2024-06-25T14:57:53.356418731Z" level=info msg="StartContainer for \"b6eb104e1e0885fdbc4f6fe84b1349505799ce459e4a6a4e04efddd3a3433708\" returns successfully" Jun 25 14:57:53.359108 containerd[1604]: time="2024-06-25T14:57:53.359082069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:57:55.125486 containerd[1604]: time="2024-06-25T14:57:55.125445762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:55.131742 containerd[1604]: time="2024-06-25T14:57:55.131703062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:57:55.136592 containerd[1604]: time="2024-06-25T14:57:55.136564417Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:55.141481 containerd[1604]: time="2024-06-25T14:57:55.141454368Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:55.147658 containerd[1604]: time="2024-06-25T14:57:55.147622359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:55.148396 containerd[1604]: time="2024-06-25T14:57:55.148365507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.789142935s" Jun 25 14:57:55.148503 containerd[1604]: time="2024-06-25T14:57:55.148483252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:57:55.161137 containerd[1604]: time="2024-06-25T14:57:55.161108359Z" level=info msg="CreateContainer within sandbox \"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:57:55.193688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417731604.mount: Deactivated successfully. 
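The podStartSLOduration figures the kubelet prints (51.59s for calico-kube-controllers above, 60.76s and 63.55s for the coredns pods earlier) are smaller than the raw creation-to-running gap because the tracker appears to exclude the image-pull window; plugging the timestamps from the calico-kube-controllers record above into that formula reproduces the logged value to within a millisecond. A hedged sketch of that arithmetic (an interpretation of these records, not kubelet's pod_startup_latency_tracker.go itself):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-kube-controllers-795866d7b8-cmg8b record above
	// (the monotonic "m=+..." suffixes are dropped so the strings parse cleanly).
	created := mustParse("2024-06-25 14:56:51 +0000 UTC")
	firstPull := mustParse("2024-06-25 14:57:41.151331105 +0000 UTC")
	lastPull := mustParse("2024-06-25 14:57:50.344600602 +0000 UTC")
	running := mustParse("2024-06-25 14:57:51.786109493 +0000 UTC")

	// Creation-to-running, with the time spent pulling images subtracted.
	startup := running.Sub(created) - lastPull.Sub(firstPull)
	fmt.Println(startup) // ~51.593s, matching podStartSLOduration=51.593007134
}
```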
Jun 25 14:57:55.208854 containerd[1604]: time="2024-06-25T14:57:55.208808578Z" level=info msg="CreateContainer within sandbox \"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"77f535164fd91977e73420a1a88cb6e75fd3e20aa97293b5a44a9bc760959149\"" Jun 25 14:57:55.213019 containerd[1604]: time="2024-06-25T14:57:55.212977419Z" level=info msg="StartContainer for \"77f535164fd91977e73420a1a88cb6e75fd3e20aa97293b5a44a9bc760959149\"" Jun 25 14:57:55.254577 systemd[1]: run-containerd-runc-k8s.io-77f535164fd91977e73420a1a88cb6e75fd3e20aa97293b5a44a9bc760959149-runc.xB7Khz.mount: Deactivated successfully. Jun 25 14:57:55.329050 kubelet[3039]: I0625 14:57:55.329019 3039 topology_manager.go:215] "Topology Admit Handler" podUID="f3581d28-3e59-474c-b741-a781d6c1bdf6" podNamespace="calico-apiserver" podName="calico-apiserver-699d4ff69b-nngr6" Jun 25 14:57:55.342659 kubelet[3039]: I0625 14:57:55.342627 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3581d28-3e59-474c-b741-a781d6c1bdf6-calico-apiserver-certs\") pod \"calico-apiserver-699d4ff69b-nngr6\" (UID: \"f3581d28-3e59-474c-b741-a781d6c1bdf6\") " pod="calico-apiserver/calico-apiserver-699d4ff69b-nngr6" Jun 25 14:57:55.342863 kubelet[3039]: I0625 14:57:55.342850 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4gf\" (UniqueName: \"kubernetes.io/projected/f3581d28-3e59-474c-b741-a781d6c1bdf6-kube-api-access-kk4gf\") pod \"calico-apiserver-699d4ff69b-nngr6\" (UID: \"f3581d28-3e59-474c-b741-a781d6c1bdf6\") " pod="calico-apiserver/calico-apiserver-699d4ff69b-nngr6" Jun 25 14:57:55.358621 kubelet[3039]: I0625 14:57:55.358583 3039 topology_manager.go:215] "Topology Admit Handler" podUID="ceb52782-9650-4932-9560-479c9be7b726" podNamespace="calico-apiserver" podName="calico-apiserver-699d4ff69b-2br2l" Jun 25 14:57:55.428169 containerd[1604]: time="2024-06-25T14:57:55.428072906Z" level=info msg="StartContainer for \"77f535164fd91977e73420a1a88cb6e75fd3e20aa97293b5a44a9bc760959149\" returns successfully" Jun 25 14:57:55.436000 audit[5002]: NETFILTER_CFG table=filter:114 family=2 entries=8 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.444024 kubelet[3039]: I0625 14:57:55.443995 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkx6d\" (UniqueName: \"kubernetes.io/projected/ceb52782-9650-4932-9560-479c9be7b726-kube-api-access-kkx6d\") pod \"calico-apiserver-699d4ff69b-2br2l\" (UID: \"ceb52782-9650-4932-9560-479c9be7b726\") " pod="calico-apiserver/calico-apiserver-699d4ff69b-2br2l" Jun 25 14:57:55.444192 kubelet[3039]: I0625 14:57:55.444181 3039 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ceb52782-9650-4932-9560-479c9be7b726-calico-apiserver-certs\") pod \"calico-apiserver-699d4ff69b-2br2l\" (UID: \"ceb52782-9650-4932-9560-479c9be7b726\") " pod="calico-apiserver/calico-apiserver-699d4ff69b-2br2l" Jun 25 14:57:55.436000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffecc1f6a0 a2=0 a3=1 items=0 ppid=3220 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.453429 kubelet[3039]: E0625 14:57:55.453403 3039 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:57:55.453654 kubelet[3039]: E0625 14:57:55.453642 3039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3581d28-3e59-474c-b741-a781d6c1bdf6-calico-apiserver-certs podName:f3581d28-3e59-474c-b741-a781d6c1bdf6 nodeName:}" failed. No retries permitted until 2024-06-25 14:57:55.953602326 +0000 UTC m=+91.564793203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f3581d28-3e59-474c-b741-a781d6c1bdf6-calico-apiserver-certs") pod "calico-apiserver-699d4ff69b-nngr6" (UID: "f3581d28-3e59-474c-b741-a781d6c1bdf6") : secret "calico-apiserver-certs" not found Jun 25 14:57:55.474555 kernel: audit: type=1325 audit(1719327475.436:312): table=filter:114 family=2 entries=8 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.474704 kernel: audit: type=1300 audit(1719327475.436:312): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffecc1f6a0 a2=0 a3=1 items=0 ppid=3220 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.487539 kernel: audit: type=1327 audit(1719327475.436:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.436000 audit[5002]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.500342 kernel: audit: type=1325 audit(1719327475.436:313): table=nat:115 family=2 entries=20 op=nft_register_rule pid=5002 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.436000 audit[5002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffecc1f6a0 a2=0 a3=1 items=0 ppid=3220 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.526899 kernel: audit: type=1300 audit(1719327475.436:313): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffecc1f6a0 a2=0 a3=1 items=0 ppid=3220 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.544966 kernel: audit: type=1327 audit(1719327475.436:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.546532 kubelet[3039]: E0625 14:57:55.546499 3039 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:57:55.546651 
kubelet[3039]: E0625 14:57:55.546560 3039 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ceb52782-9650-4932-9560-479c9be7b726-calico-apiserver-certs podName:ceb52782-9650-4932-9560-479c9be7b726 nodeName:}" failed. No retries permitted until 2024-06-25 14:57:56.046545709 +0000 UTC m=+91.657736586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ceb52782-9650-4932-9560-479c9be7b726-calico-apiserver-certs") pod "calico-apiserver-699d4ff69b-2br2l" (UID: "ceb52782-9650-4932-9560-479c9be7b726") : secret "calico-apiserver-certs" not found Jun 25 14:57:55.552000 audit[5009]: NETFILTER_CFG table=filter:116 family=2 entries=9 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.552000 audit[5009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff56c5110 a2=0 a3=1 items=0 ppid=3220 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.591491 kernel: audit: type=1325 audit(1719327475.552:314): table=filter:116 family=2 entries=9 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.591646 kernel: audit: type=1300 audit(1719327475.552:314): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffff56c5110 a2=0 a3=1 items=0 ppid=3220 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.604528 kernel: audit: type=1327 audit(1719327475.552:314): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.562000 audit[5009]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.617314 kernel: audit: type=1325 audit(1719327475.562:315): table=nat:117 family=2 entries=20 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:55.562000 audit[5009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff56c5110 a2=0 a3=1 items=0 ppid=3220 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:55.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:55.683589 kubelet[3039]: I0625 14:57:55.683458 3039 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:57:55.683771 kubelet[3039]: I0625 14:57:55.683758 3039 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:57:55.792142 kubelet[3039]: I0625 14:57:55.792103 3039 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="calico-system/csi-node-driver-6rz48" podStartSLOduration=52.174738037 podCreationTimestamp="2024-06-25 14:56:51 +0000 UTC" firstStartedPulling="2024-06-25 14:57:42.531420209 +0000 UTC m=+78.142611086" lastFinishedPulling="2024-06-25 14:57:55.148747739 +0000 UTC m=+90.759938616" observedRunningTime="2024-06-25 14:57:55.791626102 +0000 UTC m=+91.402816979" watchObservedRunningTime="2024-06-25 14:57:55.792065567 +0000 UTC m=+91.403256444" Jun 25 14:57:56.242812 containerd[1604]: time="2024-06-25T14:57:56.242698317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699d4ff69b-nngr6,Uid:f3581d28-3e59-474c-b741-a781d6c1bdf6,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:57:56.275504 containerd[1604]: time="2024-06-25T14:57:56.275459696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699d4ff69b-2br2l,Uid:ceb52782-9650-4932-9560-479c9be7b726,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:57:56.468594 systemd-networkd[1299]: cali601b4e7bb8e: Link UP Jun 25 14:57:56.480807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:57:56.480939 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali601b4e7bb8e: link becomes ready Jun 25 14:57:56.481650 systemd-networkd[1299]: cali601b4e7bb8e: Gained carrier Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.355 [INFO][5013] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0 calico-apiserver-699d4ff69b- calico-apiserver f3581d28-3e59-474c-b741-a781d6c1bdf6 894 0 2024-06-25 14:57:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699d4ff69b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb calico-apiserver-699d4ff69b-nngr6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali601b4e7bb8e [] []}} ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.355 [INFO][5013] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.403 [INFO][5036] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" HandleID="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.419 [INFO][5036] ipam_plugin.go 264: Auto assigning IP ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" HandleID="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003053e0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"calico-apiserver-699d4ff69b-nngr6", "timestamp":"2024-06-25 14:57:56.403598969 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.419 [INFO][5036] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.419 [INFO][5036] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.419 [INFO][5036] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.427 [INFO][5036] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.435 [INFO][5036] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.444 [INFO][5036] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.446 [INFO][5036] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.448 [INFO][5036] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.448 [INFO][5036] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.450 [INFO][5036] ipam.go 1685: Creating new handle: k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.456 [INFO][5036] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5036] ipam.go 1216: Successfully claimed IPs: [192.168.97.197/26] block=192.168.97.192/26 handle="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5036] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.197/26] handle="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5036] ipam_plugin.go 373: Released host-wide IPAM lock. 
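The IPAM trace above shows Calico acquiring the host-wide IPAM lock and claiming 192.168.97.197 out of this node's affine block 192.168.97.192/26 (the sibling calico-apiserver pod receives 192.168.97.198 from the same block a few entries further down). A minimal Go sketch of that block-containment arithmetic, standard library only and purely illustrative:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affine block reported for host ci-3815.2.4-a-2c7c8223bb in the trace above.
	block := netip.MustParsePrefix("192.168.97.192/26")
	// Addresses assigned to the two calico-apiserver pods in this log.
	for _, a := range []string{"192.168.97.197", "192.168.97.198"} {
		addr := netip.MustParseAddr(a)
		// A /26 holds 64 addresses; both claimed IPs must fall inside the block.
		fmt.Printf("%s inside %s: %v\n", addr, block, block.Contains(addr))
	}
}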
Jun 25 14:57:56.499125 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5036] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.197/26] IPv6=[] ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" HandleID="k8s-pod-network.f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.466 [INFO][5013] k8s.go 386: Populated endpoint ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0", GenerateName:"calico-apiserver-699d4ff69b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3581d28-3e59-474c-b741-a781d6c1bdf6", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699d4ff69b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"calico-apiserver-699d4ff69b-nngr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali601b4e7bb8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.466 [INFO][5013] k8s.go 387: Calico CNI using IPs: [192.168.97.197/32] ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.466 [INFO][5013] dataplane_linux.go 68: Setting the host side veth name to cali601b4e7bb8e ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.482 [INFO][5013] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.482 [INFO][5013] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0", GenerateName:"calico-apiserver-699d4ff69b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3581d28-3e59-474c-b741-a781d6c1bdf6", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699d4ff69b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a", Pod:"calico-apiserver-699d4ff69b-nngr6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali601b4e7bb8e", MAC:"ce:9d:d3:af:cc:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:56.499893 containerd[1604]: 2024-06-25 14:57:56.489 [INFO][5013] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-nngr6" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--nngr6-eth0" Jun 25 14:57:56.532552 systemd-networkd[1299]: cali936544831b1: Link UP Jun 25 14:57:56.541438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali936544831b1: link becomes ready Jun 25 14:57:56.541759 systemd-networkd[1299]: cali936544831b1: Gained carrier Jun 25 14:57:56.552000 audit[5077]: NETFILTER_CFG table=filter:118 family=2 entries=51 op=nft_register_chain pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:56.552000 audit[5077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26260 a0=3 a1=fffffb3e82d0 a2=0 a3=ffffbdff4fa8 items=0 ppid=4113 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:56.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:56.555253 containerd[1604]: time="2024-06-25T14:57:56.551783261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:56.555253 containerd[1604]: time="2024-06-25T14:57:56.551839894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:56.555253 containerd[1604]: time="2024-06-25T14:57:56.551853572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:56.555253 containerd[1604]: time="2024-06-25T14:57:56.551862651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.394 [INFO][5026] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0 calico-apiserver-699d4ff69b- calico-apiserver ceb52782-9650-4932-9560-479c9be7b726 895 0 2024-06-25 14:57:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699d4ff69b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3815.2.4-a-2c7c8223bb calico-apiserver-699d4ff69b-2br2l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali936544831b1 [] []}} ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.394 [INFO][5026] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.433 [INFO][5044] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" HandleID="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.450 [INFO][5044] ipam_plugin.go 264: Auto assigning IP ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" HandleID="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3815.2.4-a-2c7c8223bb", "pod":"calico-apiserver-699d4ff69b-2br2l", "timestamp":"2024-06-25 14:57:56.433950363 +0000 UTC"}, Hostname:"ci-3815.2.4-a-2c7c8223bb", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.450 [INFO][5044] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5044] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.464 [INFO][5044] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3815.2.4-a-2c7c8223bb' Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.479 [INFO][5044] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.487 [INFO][5044] ipam.go 372: Looking up existing affinities for host host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.498 [INFO][5044] ipam.go 489: Trying affinity for 192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.501 [INFO][5044] ipam.go 155: Attempting to load block cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.511 [INFO][5044] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.97.192/26 host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.511 [INFO][5044] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.97.192/26 handle="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.515 [INFO][5044] ipam.go 1685: Creating new handle: k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.518 [INFO][5044] ipam.go 1203: Writing block in order to claim IPs block=192.168.97.192/26 handle="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.525 [INFO][5044] ipam.go 1216: Successfully claimed IPs: [192.168.97.198/26] block=192.168.97.192/26 handle="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.525 [INFO][5044] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.97.198/26] handle="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" host="ci-3815.2.4-a-2c7c8223bb" Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.525 [INFO][5044] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:57:56.557699 containerd[1604]: 2024-06-25 14:57:56.525 [INFO][5044] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.97.198/26] IPv6=[] ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" HandleID="k8s-pod-network.3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.528 [INFO][5026] k8s.go 386: Populated endpoint ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0", GenerateName:"calico-apiserver-699d4ff69b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb52782-9650-4932-9560-479c9be7b726", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699d4ff69b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"", Pod:"calico-apiserver-699d4ff69b-2br2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali936544831b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.528 [INFO][5026] k8s.go 387: Calico CNI using IPs: [192.168.97.198/32] ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.528 [INFO][5026] dataplane_linux.go 68: Setting the host side veth name to cali936544831b1 ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.542 [INFO][5026] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.543 [INFO][5026] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0", GenerateName:"calico-apiserver-699d4ff69b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ceb52782-9650-4932-9560-479c9be7b726", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 57, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699d4ff69b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d", Pod:"calico-apiserver-699d4ff69b-2br2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali936544831b1", MAC:"4e:79:56:57:6d:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:57:56.558253 containerd[1604]: 2024-06-25 14:57:56.556 [INFO][5026] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d" Namespace="calico-apiserver" Pod="calico-apiserver-699d4ff69b-2br2l" WorkloadEndpoint="ci--3815.2.4--a--2c7c8223bb-k8s-calico--apiserver--699d4ff69b--2br2l-eth0" Jun 25 14:57:56.573000 audit[5109]: NETFILTER_CFG table=filter:119 family=2 entries=45 op=nft_register_chain pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:57:56.573000 audit[5109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23096 a0=3 a1=ffffeed2e8b0 a2=0 a3=ffffba784fa8 items=0 ppid=4113 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:56.573000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:57:56.593115 containerd[1604]: time="2024-06-25T14:57:56.589960375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:57:56.593115 containerd[1604]: time="2024-06-25T14:57:56.590010849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:56.593115 containerd[1604]: time="2024-06-25T14:57:56.590023967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:57:56.593115 containerd[1604]: time="2024-06-25T14:57:56.590034806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:57:56.609521 containerd[1604]: time="2024-06-25T14:57:56.609470821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699d4ff69b-nngr6,Uid:f3581d28-3e59-474c-b741-a781d6c1bdf6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a\"" Jun 25 14:57:56.611667 containerd[1604]: time="2024-06-25T14:57:56.611636035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:57:56.639299 containerd[1604]: time="2024-06-25T14:57:56.639235807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699d4ff69b-2br2l,Uid:ceb52782-9650-4932-9560-479c9be7b726,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d\"" Jun 25 14:57:57.939458 systemd-networkd[1299]: cali601b4e7bb8e: Gained IPv6LL Jun 25 14:57:58.387386 systemd-networkd[1299]: cali936544831b1: Gained IPv6LL Jun 25 14:57:58.749583 containerd[1604]: time="2024-06-25T14:57:58.749530216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:58.752229 containerd[1604]: time="2024-06-25T14:57:58.752188779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:57:58.757532 containerd[1604]: time="2024-06-25T14:57:58.757453712Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:58.762567 containerd[1604]: time="2024-06-25T14:57:58.762540706Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:58.768848 containerd[1604]: time="2024-06-25T14:57:58.768820557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:58.770540 containerd[1604]: time="2024-06-25T14:57:58.770496118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.158716021s" Jun 25 14:57:58.770670 containerd[1604]: time="2024-06-25T14:57:58.770648420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:57:58.772074 containerd[1604]: time="2024-06-25T14:57:58.772046573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:57:58.773981 containerd[1604]: time="2024-06-25T14:57:58.773952506Z" level=info msg="CreateContainer within sandbox \"f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" 
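The entries above record containerd pulling ghcr.io/flatcar/calico/apiserver:v3.28.0 and then creating the calico-apiserver container inside the previously created pod sandbox. A rough sketch of the same pull-then-create sequence using the containerd Go client follows; the socket path and container ID are illustrative assumptions, not values taken from this log.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Assumed default socket path; adjust for the host in question.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image named in the PullImage entries above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly corresponds to the CreateContainer step in the log; the ID is made up.
	c, err := client.NewContainer(ctx, "calico-apiserver-example",
		containerd.WithImage(img),
		containerd.WithNewSnapshot("calico-apiserver-example-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)),
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("created container", c.ID())
}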
Jun 25 14:57:58.809603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569813288.mount: Deactivated successfully. Jun 25 14:57:58.825612 containerd[1604]: time="2024-06-25T14:57:58.825568674Z" level=info msg="CreateContainer within sandbox \"f31904835aaa25cf085d55ba7094c7a43b0b8c8216d738c6306c8ef2217d710a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc2225df1f91bf7a17872a613f8122b6dd7eb5c0bddce193f43f288f4b00827d\"" Jun 25 14:57:58.827340 containerd[1604]: time="2024-06-25T14:57:58.827252754Z" level=info msg="StartContainer for \"bc2225df1f91bf7a17872a613f8122b6dd7eb5c0bddce193f43f288f4b00827d\"" Jun 25 14:57:58.914386 containerd[1604]: time="2024-06-25T14:57:58.914341295Z" level=info msg="StartContainer for \"bc2225df1f91bf7a17872a613f8122b6dd7eb5c0bddce193f43f288f4b00827d\" returns successfully" Jun 25 14:57:59.087108 containerd[1604]: time="2024-06-25T14:57:59.086992828Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:59.090353 containerd[1604]: time="2024-06-25T14:57:59.090314838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 14:57:59.095651 containerd[1604]: time="2024-06-25T14:57:59.095614416Z" level=info msg="ImageUpdate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:59.100812 containerd[1604]: time="2024-06-25T14:57:59.100775210Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:59.105351 containerd[1604]: time="2024-06-25T14:57:59.105321796Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:57:59.106020 containerd[1604]: time="2024-06-25T14:57:59.105987278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 333.810001ms" Jun 25 14:57:59.106125 containerd[1604]: time="2024-06-25T14:57:59.106106344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:57:59.108015 containerd[1604]: time="2024-06-25T14:57:59.107988723Z" level=info msg="CreateContainer within sandbox \"3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:57:59.159017 containerd[1604]: time="2024-06-25T14:57:59.158955257Z" level=info msg="CreateContainer within sandbox \"3308e50b234258a346f60398ce0d77f0db75fe383c4151c4f34cf98376bf928d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b8c733149f9fe6fe6357c4f9fd412e87e74578fd2847d42dc0ad018ae2907672\"" Jun 25 14:57:59.159739 containerd[1604]: time="2024-06-25T14:57:59.159714968Z" level=info msg="StartContainer for \"b8c733149f9fe6fe6357c4f9fd412e87e74578fd2847d42dc0ad018ae2907672\"" Jun 25 
14:57:59.224932 containerd[1604]: time="2024-06-25T14:57:59.224888395Z" level=info msg="StartContainer for \"b8c733149f9fe6fe6357c4f9fd412e87e74578fd2847d42dc0ad018ae2907672\" returns successfully" Jun 25 14:57:59.805920 systemd[1]: run-containerd-runc-k8s.io-bc2225df1f91bf7a17872a613f8122b6dd7eb5c0bddce193f43f288f4b00827d-runc.ED8B8M.mount: Deactivated successfully. Jun 25 14:57:59.820348 kubelet[3039]: I0625 14:57:59.820315 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699d4ff69b-2br2l" podStartSLOduration=2.354508641 podCreationTimestamp="2024-06-25 14:57:55 +0000 UTC" firstStartedPulling="2024-06-25 14:57:56.640645794 +0000 UTC m=+92.251836671" lastFinishedPulling="2024-06-25 14:57:59.10639567 +0000 UTC m=+94.717586547" observedRunningTime="2024-06-25 14:57:59.809494061 +0000 UTC m=+95.420684938" watchObservedRunningTime="2024-06-25 14:57:59.820258517 +0000 UTC m=+95.431449394" Jun 25 14:57:59.840000 audit[5251]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=5251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:59.840000 audit[5251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffef2dca60 a2=0 a3=1 items=0 ppid=3220 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:59.842000 audit[5251]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:59.842000 audit[5251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffef2dca60 a2=0 a3=1 items=0 ppid=3220 pid=5251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:59.861000 audit[5253]: NETFILTER_CFG table=filter:122 family=2 entries=10 op=nft_register_rule pid=5253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:59.861000 audit[5253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd808d610 a2=0 a3=1 items=0 ppid=3220 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:57:59.863000 audit[5253]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:57:59.863000 audit[5253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd808d610 a2=0 a3=1 items=0 ppid=3220 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:57:59.863000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:00.995664 kubelet[3039]: I0625 14:58:00.995630 3039 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699d4ff69b-nngr6" podStartSLOduration=3.835782304 podCreationTimestamp="2024-06-25 14:57:55 +0000 UTC" firstStartedPulling="2024-06-25 14:57:56.611337871 +0000 UTC m=+92.222528748" lastFinishedPulling="2024-06-25 14:57:58.771142881 +0000 UTC m=+94.382333718" observedRunningTime="2024-06-25 14:57:59.822521691 +0000 UTC m=+95.433712568" watchObservedRunningTime="2024-06-25 14:58:00.995587274 +0000 UTC m=+96.606778151" Jun 25 14:58:01.086000 audit[5255]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:01.091556 kernel: kauditd_printk_skb: 20 callbacks suppressed Jun 25 14:58:01.091641 kernel: audit: type=1325 audit(1719327481.086:322): table=filter:124 family=2 entries=9 op=nft_register_rule pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:01.086000 audit[5255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe088ad40 a2=0 a3=1 items=0 ppid=3220 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:01.129187 kernel: audit: type=1300 audit(1719327481.086:322): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe088ad40 a2=0 a3=1 items=0 ppid=3220 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:01.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:01.142036 kernel: audit: type=1327 audit(1719327481.086:322): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:01.087000 audit[5255]: NETFILTER_CFG table=nat:125 family=2 entries=27 op=nft_register_chain pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:01.155447 kernel: audit: type=1325 audit(1719327481.087:323): table=nat:125 family=2 entries=27 op=nft_register_chain pid=5255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:01.087000 audit[5255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffe088ad40 a2=0 a3=1 items=0 ppid=3220 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:01.182974 kernel: audit: type=1300 audit(1719327481.087:323): arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffe088ad40 a2=0 a3=1 items=0 ppid=3220 pid=5255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:01.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:01.197848 kernel: audit: type=1327 
audit(1719327481.087:323): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:02.138000 audit[5260]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:02.138000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffce8e4460 a2=0 a3=1 items=0 ppid=3220 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:02.177583 kernel: audit: type=1325 audit(1719327482.138:324): table=filter:126 family=2 entries=8 op=nft_register_rule pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:02.177746 kernel: audit: type=1300 audit(1719327482.138:324): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffce8e4460 a2=0 a3=1 items=0 ppid=3220 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:02.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:02.191136 kernel: audit: type=1327 audit(1719327482.138:324): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:02.140000 audit[5260]: NETFILTER_CFG table=nat:127 family=2 entries=34 op=nft_register_chain pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:02.204314 kernel: audit: type=1325 audit(1719327482.140:325): table=nat:127 family=2 entries=34 op=nft_register_chain pid=5260 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:58:02.140000 audit[5260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffce8e4460 a2=0 a3=1 items=0 ppid=3220 pid=5260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:02.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:58:10.640390 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.LIODrt.mount: Deactivated successfully. Jun 25 14:58:11.233768 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.ZigiGz.mount: Deactivated successfully. Jun 25 14:58:12.607102 systemd[1]: run-containerd-runc-k8s.io-d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203-runc.4SFt9D.mount: Deactivated successfully. Jun 25 14:58:21.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.26:22-10.200.16.10:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:21.910633 systemd[1]: Started sshd@7-10.200.20.26:22-10.200.16.10:52154.service - OpenSSH per-connection server daemon (10.200.16.10:52154). 
Jun 25 14:58:21.914633 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:58:21.914713 kernel: audit: type=1130 audit(1719327501.910:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.26:22-10.200.16.10:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:22.345000 audit[5348]: USER_ACCT pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.346973 sshd[5348]: Accepted publickey for core from 10.200.16.10 port 52154 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:22.369359 kernel: audit: type=1101 audit(1719327502.345:327): pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.368000 audit[5348]: CRED_ACQ pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.370554 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:22.391338 kernel: audit: type=1103 audit(1719327502.368:328): pid=5348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.396878 systemd-logind[1576]: New session 10 of user core. Jun 25 14:58:22.435676 kernel: audit: type=1006 audit(1719327502.369:329): pid=5348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 14:58:22.435713 kernel: audit: type=1300 audit(1719327502.369:329): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe222b690 a2=3 a3=1 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:22.435734 kernel: audit: type=1327 audit(1719327502.369:329): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:22.369000 audit[5348]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe222b690 a2=3 a3=1 items=0 ppid=1 pid=5348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:22.369000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:22.435692 systemd[1]: Started session-10.scope - Session 10 of User core. 
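The sshd entry above reports the accepted public key as a SHA256 fingerprint ("SHA256:Qh+VlRb4..."). A small, purely illustrative Go sketch of how such a fingerprint is derived with golang.org/x/crypto/ssh; it generates a throwaway ed25519 key, since the actual key material is of course not part of the log.

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key pair for demonstration only.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Produces the same "SHA256:<base64>" form that sshd logs for accepted keys.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}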
Jun 25 14:58:22.440000 audit[5348]: USER_START pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.466000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.484966 kernel: audit: type=1105 audit(1719327502.440:330): pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.485085 kernel: audit: type=1103 audit(1719327502.466:331): pid=5351 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.810805 sshd[5348]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:22.811000 audit[5348]: USER_END pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.836943 systemd[1]: sshd@7-10.200.20.26:22-10.200.16.10:52154.service: Deactivated successfully. Jun 25 14:58:22.811000 audit[5348]: CRED_DISP pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.838705 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:58:22.839390 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:58:22.840932 systemd-logind[1576]: Removed session 10. Jun 25 14:58:22.857083 kernel: audit: type=1106 audit(1719327502.811:332): pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.857188 kernel: audit: type=1104 audit(1719327502.811:333): pid=5348 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:22.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.26:22-10.200.16.10:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:24.529828 containerd[1604]: time="2024-06-25T14:58:24.529498866Z" level=info msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.566 [WARNING][5380] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"82c03bfc-edfe-48b6-9a13-261014350513", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736", Pod:"coredns-5dd5756b68-45cwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b03d1bb80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.566 [INFO][5380] k8s.go 608: Cleaning up netns ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.566 [INFO][5380] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" iface="eth0" netns="" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.566 [INFO][5380] k8s.go 615: Releasing IP address(es) ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.566 [INFO][5380] utils.go 188: Calico CNI releasing IP address ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.587 [INFO][5386] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.587 [INFO][5386] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.587 [INFO][5386] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.596 [WARNING][5386] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.596 [INFO][5386] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.597 [INFO][5386] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:24.602528 containerd[1604]: 2024-06-25 14:58:24.600 [INFO][5380] k8s.go 621: Teardown processing complete. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.602528 containerd[1604]: time="2024-06-25T14:58:24.602298033Z" level=info msg="TearDown network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" successfully" Jun 25 14:58:24.602528 containerd[1604]: time="2024-06-25T14:58:24.602332111Z" level=info msg="StopPodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" returns successfully" Jun 25 14:58:24.604282 containerd[1604]: time="2024-06-25T14:58:24.603208399Z" level=info msg="RemovePodSandbox for \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" Jun 25 14:58:24.604282 containerd[1604]: time="2024-06-25T14:58:24.603258915Z" level=info msg="Forcibly stopping sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\"" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.645 [WARNING][5404] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"82c03bfc-edfe-48b6-9a13-261014350513", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"c94f4d727b8b8e8e43737303cb1757afd9448e540936023988cd1e5d6ba38736", Pod:"coredns-5dd5756b68-45cwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b03d1bb80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.645 [INFO][5404] k8s.go 608: Cleaning up netns ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.645 [INFO][5404] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" iface="eth0" netns="" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.645 [INFO][5404] k8s.go 615: Releasing IP address(es) ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.645 [INFO][5404] utils.go 188: Calico CNI releasing IP address ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.666 [INFO][5410] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.666 [INFO][5410] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.666 [INFO][5410] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.675 [WARNING][5410] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.675 [INFO][5410] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" HandleID="k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--45cwv-eth0" Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.677 [INFO][5410] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:24.679650 containerd[1604]: 2024-06-25 14:58:24.678 [INFO][5404] k8s.go 621: Teardown processing complete. ContainerID="0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7" Jun 25 14:58:24.680299 containerd[1604]: time="2024-06-25T14:58:24.680244739Z" level=info msg="TearDown network for sandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" successfully" Jun 25 14:58:24.693352 containerd[1604]: time="2024-06-25T14:58:24.693319110Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:58:24.693587 containerd[1604]: time="2024-06-25T14:58:24.693562210Z" level=info msg="RemovePodSandbox \"0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7\" returns successfully" Jun 25 14:58:24.694052 containerd[1604]: time="2024-06-25T14:58:24.694017613Z" level=info msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.738 [WARNING][5429] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a", Pod:"coredns-5dd5756b68-r4gbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0ad7032e40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.738 [INFO][5429] k8s.go 608: Cleaning up netns ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.738 [INFO][5429] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" iface="eth0" netns="" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.738 [INFO][5429] k8s.go 615: Releasing IP address(es) ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.738 [INFO][5429] utils.go 188: Calico CNI releasing IP address ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.759 [INFO][5435] ipam_plugin.go 411: Releasing address using handleID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.759 [INFO][5435] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.759 [INFO][5435] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.766 [WARNING][5435] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.766 [INFO][5435] ipam_plugin.go 439: Releasing address using workloadID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.769 [INFO][5435] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:24.771230 containerd[1604]: 2024-06-25 14:58:24.770 [INFO][5429] k8s.go 621: Teardown processing complete. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.771727 containerd[1604]: time="2024-06-25T14:58:24.771264976Z" level=info msg="TearDown network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" successfully" Jun 25 14:58:24.771727 containerd[1604]: time="2024-06-25T14:58:24.771313572Z" level=info msg="StopPodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" returns successfully" Jun 25 14:58:24.771783 containerd[1604]: time="2024-06-25T14:58:24.771760416Z" level=info msg="RemovePodSandbox for \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" Jun 25 14:58:24.771848 containerd[1604]: time="2024-06-25T14:58:24.771790213Z" level=info msg="Forcibly stopping sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\"" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.809 [WARNING][5454] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2620b3d9-2ddf-4eee-abb9-1f9e2c9332ae", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"a691152ff971bef61a62bf96f6e49614445aefddfd9257096bfd404a45aeda8a", Pod:"coredns-5dd5756b68-r4gbf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0ad7032e40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.809 [INFO][5454] k8s.go 608: Cleaning up netns ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.809 [INFO][5454] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" iface="eth0" netns="" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.809 [INFO][5454] k8s.go 615: Releasing IP address(es) ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.809 [INFO][5454] utils.go 188: Calico CNI releasing IP address ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.828 [INFO][5460] ipam_plugin.go 411: Releasing address using handleID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.828 [INFO][5460] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.828 [INFO][5460] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.836 [WARNING][5460] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.836 [INFO][5460] ipam_plugin.go 439: Releasing address using workloadID ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" HandleID="k8s-pod-network.410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-coredns--5dd5756b68--r4gbf-eth0" Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.837 [INFO][5460] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:24.839953 containerd[1604]: 2024-06-25 14:58:24.838 [INFO][5454] k8s.go 621: Teardown processing complete. ContainerID="410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0" Jun 25 14:58:24.840493 containerd[1604]: time="2024-06-25T14:58:24.840460198Z" level=info msg="TearDown network for sandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" successfully" Jun 25 14:58:24.848304 containerd[1604]: time="2024-06-25T14:58:24.848081575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:58:24.848304 containerd[1604]: time="2024-06-25T14:58:24.848179047Z" level=info msg="RemovePodSandbox \"410f0a45fa08e90015dcc92d1a6a89673c9c6d3e12721fe948c6c753ce4e4bd0\" returns successfully" Jun 25 14:58:24.849176 containerd[1604]: time="2024-06-25T14:58:24.849153927Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.895 [WARNING][5479] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7813833d-a993-419c-9c27-7d6c8ce9f5ba", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a", Pod:"csi-node-driver-6rz48", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib54124dd017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.896 [INFO][5479] k8s.go 608: Cleaning up netns ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.896 [INFO][5479] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" iface="eth0" netns="" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.896 [INFO][5479] k8s.go 615: Releasing IP address(es) ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.896 [INFO][5479] utils.go 188: Calico CNI releasing IP address ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.936 [INFO][5485] ipam_plugin.go 411: Releasing address using handleID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.937 [INFO][5485] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.937 [INFO][5485] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.945 [WARNING][5485] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.945 [INFO][5485] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.946 [INFO][5485] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:24.950718 containerd[1604]: 2024-06-25 14:58:24.949 [INFO][5479] k8s.go 621: Teardown processing complete. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:24.951222 containerd[1604]: time="2024-06-25T14:58:24.951189463Z" level=info msg="TearDown network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" successfully" Jun 25 14:58:24.951317 containerd[1604]: time="2024-06-25T14:58:24.951270777Z" level=info msg="StopPodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" returns successfully" Jun 25 14:58:24.951949 containerd[1604]: time="2024-06-25T14:58:24.951926883Z" level=info msg="RemovePodSandbox for \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:58:24.952103 containerd[1604]: time="2024-06-25T14:58:24.952063392Z" level=info msg="Forcibly stopping sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\"" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:24.993 [WARNING][5503] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7813833d-a993-419c-9c27-7d6c8ce9f5ba", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"6e0ee2e9d74ed53fc896444f88154533294b3708aed60b0628ccb898ac8ae69a", Pod:"csi-node-driver-6rz48", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.97.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib54124dd017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:24.993 [INFO][5503] k8s.go 608: Cleaning up netns ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:24.993 [INFO][5503] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" iface="eth0" netns="" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:24.993 [INFO][5503] k8s.go 615: Releasing IP address(es) ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:24.993 [INFO][5503] utils.go 188: Calico CNI releasing IP address ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.012 [INFO][5509] ipam_plugin.go 411: Releasing address using handleID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.012 [INFO][5509] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.012 [INFO][5509] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.021 [WARNING][5509] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.021 [INFO][5509] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" HandleID="k8s-pod-network.3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-csi--node--driver--6rz48-eth0" Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.023 [INFO][5509] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:25.025573 containerd[1604]: 2024-06-25 14:58:25.024 [INFO][5503] k8s.go 621: Teardown processing complete. ContainerID="3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56" Jun 25 14:58:25.026097 containerd[1604]: time="2024-06-25T14:58:25.026054450Z" level=info msg="TearDown network for sandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" successfully" Jun 25 14:58:25.034150 containerd[1604]: time="2024-06-25T14:58:25.034108321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:58:25.034383 containerd[1604]: time="2024-06-25T14:58:25.034361100Z" level=info msg="RemovePodSandbox \"3a78466f6e80af213020d38d27465e8a7279554f674c9400b0f1053f258ffb56\" returns successfully" Jun 25 14:58:25.035026 containerd[1604]: time="2024-06-25T14:58:25.034974571Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.068 [WARNING][5527] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0", GenerateName:"calico-kube-controllers-795866d7b8-", Namespace:"calico-system", SelfLink:"", UID:"a5766523-eff6-42b0-8a56-15ca01f02ba3", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795866d7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98", Pod:"calico-kube-controllers-795866d7b8-cmg8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie69df58ca20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.069 [INFO][5527] k8s.go 608: Cleaning up netns ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.069 [INFO][5527] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" iface="eth0" netns="" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.069 [INFO][5527] k8s.go 615: Releasing IP address(es) ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.069 [INFO][5527] utils.go 188: Calico CNI releasing IP address ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.095 [INFO][5533] ipam_plugin.go 411: Releasing address using handleID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.095 [INFO][5533] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.095 [INFO][5533] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.104 [WARNING][5533] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.104 [INFO][5533] ipam_plugin.go 439: Releasing address using workloadID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.106 [INFO][5533] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:25.109355 containerd[1604]: 2024-06-25 14:58:25.107 [INFO][5527] k8s.go 621: Teardown processing complete. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.109872 containerd[1604]: time="2024-06-25T14:58:25.109410770Z" level=info msg="TearDown network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" successfully" Jun 25 14:58:25.109872 containerd[1604]: time="2024-06-25T14:58:25.109448207Z" level=info msg="StopPodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" returns successfully" Jun 25 14:58:25.110142 containerd[1604]: time="2024-06-25T14:58:25.110097395Z" level=info msg="RemovePodSandbox for \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:58:25.110191 containerd[1604]: time="2024-06-25T14:58:25.110148550Z" level=info msg="Forcibly stopping sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\"" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.145 [WARNING][5551] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0", GenerateName:"calico-kube-controllers-795866d7b8-", Namespace:"calico-system", SelfLink:"", UID:"a5766523-eff6-42b0-8a56-15ca01f02ba3", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 56, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"795866d7b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3815.2.4-a-2c7c8223bb", ContainerID:"00000aee2cbb0c3d0e584beaa9a0e49e97e93a6fc03c7b9008a514f42c455f98", Pod:"calico-kube-controllers-795866d7b8-cmg8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie69df58ca20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.145 [INFO][5551] k8s.go 608: Cleaning up netns ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.145 [INFO][5551] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" iface="eth0" netns="" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.146 [INFO][5551] k8s.go 615: Releasing IP address(es) ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.146 [INFO][5551] utils.go 188: Calico CNI releasing IP address ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.165 [INFO][5557] ipam_plugin.go 411: Releasing address using handleID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.165 [INFO][5557] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.165 [INFO][5557] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.176 [WARNING][5557] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.177 [INFO][5557] ipam_plugin.go 439: Releasing address using workloadID ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" HandleID="k8s-pod-network.33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Workload="ci--3815.2.4--a--2c7c8223bb-k8s-calico--kube--controllers--795866d7b8--cmg8b-eth0" Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.178 [INFO][5557] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:58:25.180770 containerd[1604]: 2024-06-25 14:58:25.179 [INFO][5551] k8s.go 621: Teardown processing complete. ContainerID="33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1" Jun 25 14:58:25.181248 containerd[1604]: time="2024-06-25T14:58:25.180813293Z" level=info msg="TearDown network for sandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" successfully" Jun 25 14:58:25.190130 containerd[1604]: time="2024-06-25T14:58:25.190080546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:58:25.190261 containerd[1604]: time="2024-06-25T14:58:25.190183778Z" level=info msg="RemovePodSandbox \"33ef85299f9008fa2edcfb35515fa7cd2e575607c3dc963aad016a76ef3a1cb1\" returns successfully" Jun 25 14:58:27.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.26:22-10.200.16.10:50048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:27.890733 systemd[1]: Started sshd@8-10.200.20.26:22-10.200.16.10:50048.service - OpenSSH per-connection server daemon (10.200.16.10:50048). Jun 25 14:58:27.895314 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:58:27.895437 kernel: audit: type=1130 audit(1719327507.889:335): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.26:22-10.200.16.10:50048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:28.352000 audit[5563]: USER_ACCT pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.354501 sshd[5563]: Accepted publickey for core from 10.200.16.10 port 50048 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:28.376316 kernel: audit: type=1101 audit(1719327508.352:336): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.375000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.377193 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:28.399266 systemd-logind[1576]: New session 11 of user core. Jun 25 14:58:28.430119 kernel: audit: type=1103 audit(1719327508.375:337): pid=5563 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.430158 kernel: audit: type=1006 audit(1719327508.375:338): pid=5563 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 14:58:28.430178 kernel: audit: type=1300 audit(1719327508.375:338): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c887d0 a2=3 a3=1 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:28.375000 audit[5563]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c887d0 a2=3 a3=1 items=0 ppid=1 pid=5563 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:28.429652 systemd[1]: Started session-11.scope - Session 11 of User core. 
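The sandbox teardown passes logged above (StopPodSandbox, CleanUpNamespace, release IP address, RemovePodSandbox) all trace the same steps through ipam_plugin.go: take the host-wide IPAM lock, release the allocation recorded for the handle ID, log a warning if it is already gone, and drop the lock. A minimal Go sketch of that lock/release/unlock bookkeeping, using an in-process map and sync.Mutex as stand-ins for Calico's real shared datastore and lock — only the handle-ID string is taken from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// ipamStore is a toy stand-in for the allocation store that Calico's
// ipam_plugin.go guards with its host-wide IPAM lock; the real plugin talks
// to a shared datastore, not an in-process map.
type ipamStore struct {
	mu       sync.Mutex        // plays the role of the "host-wide IPAM lock"
	byHandle map[string]string // handle ID -> allocated address
}

// releaseByHandle mirrors the logged sequence: acquire the lock, release the
// address recorded for the handle, treat "doesn't exist" as a warning rather
// than an error, then drop the lock.
func (s *ipamStore) releaseByHandle(handleID string) {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	addr, ok := s.byHandle[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release address for %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %s for handle %s\n", addr, handleID)
}

func main() {
	store := &ipamStore{byHandle: map[string]string{}}
	// Handle ID format copied from the log: "k8s-pod-network.<sandbox ID>".
	store.releaseByHandle("k8s-pod-network.0c37d9b5cdf2736fee1fef14a17a1e0144ec66ee9d56506ef9bf51228a2e48a7")
}
```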
Jun 25 14:58:28.375000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:28.437799 kernel: audit: type=1327 audit(1719327508.375:338): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:28.434000 audit[5563]: USER_START pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.461188 kernel: audit: type=1105 audit(1719327508.434:339): pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.436000 audit[5566]: CRED_ACQ pid=5566 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.480817 kernel: audit: type=1103 audit(1719327508.436:340): pid=5566 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.780530 sshd[5563]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:28.781000 audit[5563]: USER_END pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.784350 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:58:28.787856 systemd[1]: sshd@8-10.200.20.26:22-10.200.16.10:50048.service: Deactivated successfully. Jun 25 14:58:28.788822 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:58:28.790401 systemd-logind[1576]: Removed session 11. Jun 25 14:58:28.781000 audit[5563]: CRED_DISP pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.825305 kernel: audit: type=1106 audit(1719327508.781:341): pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.825412 kernel: audit: type=1104 audit(1719327508.781:342): pid=5563 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:28.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.26:22-10.200.16.10:50048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:33.860839 systemd[1]: Started sshd@9-10.200.20.26:22-10.200.16.10:50056.service - OpenSSH per-connection server daemon (10.200.16.10:50056). Jun 25 14:58:33.883555 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:58:33.883665 kernel: audit: type=1130 audit(1719327513.860:344): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.26:22-10.200.16.10:50056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:33.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.26:22-10.200.16.10:50056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:34.324000 audit[5578]: USER_ACCT pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.326433 sshd[5578]: Accepted publickey for core from 10.200.16.10 port 50056 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:34.327252 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:34.326000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.350729 systemd-logind[1576]: New session 12 of user core. Jun 25 14:58:34.403106 kernel: audit: type=1101 audit(1719327514.324:345): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.403140 kernel: audit: type=1103 audit(1719327514.326:346): pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.403159 kernel: audit: type=1006 audit(1719327514.326:347): pid=5578 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jun 25 14:58:34.403177 kernel: audit: type=1300 audit(1719327514.326:347): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4a935d0 a2=3 a3=1 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:34.326000 audit[5578]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4a935d0 a2=3 a3=1 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:34.402650 systemd[1]: Started session-12.scope - Session 12 of User core. 
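Each kernel audit echo above carries a stamp of the form audit(EPOCH.millis:SERIAL), for example audit(1719327508.434:339); the epoch part resolves to the same wall-clock instant printed at the head of the record. A small Go sketch, standard library only, that decodes such a stamp — the sample value is copied from the session-11 USER_START record above, the helper name is ours:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseAuditStamp decodes the "audit(1719327508.434:339)" field the kernel
// prefixes to each audit record: seconds.milliseconds since the Unix epoch,
// plus a per-boot serial number.
func parseAuditStamp(s string) (time.Time, uint64, error) {
	s = strings.TrimSuffix(strings.TrimPrefix(s, "audit("), ")")
	tsPart, serialPart, ok := strings.Cut(s, ":")
	if !ok {
		return time.Time{}, 0, fmt.Errorf("malformed stamp %q", s)
	}
	secPart, msPart, _ := strings.Cut(tsPart, ".")
	sec, err := strconv.ParseInt(secPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	ms, _ := strconv.ParseInt(msPart, 10, 64)
	serial, err := strconv.ParseUint(serialPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	return time.Unix(sec, ms*int64(time.Millisecond)), serial, nil
}

func main() {
	// Stamp taken verbatim from the session-11 USER_START record above.
	t, serial, err := parseAuditStamp("audit(1719327508.434:339)")
	if err != nil {
		panic(err)
	}
	fmt.Println(t.UTC().Format(time.RFC3339Nano), "serial", serial) // 2024-06-25T14:58:28.434Z serial 339
}
```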
Jun 25 14:58:34.326000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:34.412912 kernel: audit: type=1327 audit(1719327514.326:347): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:34.413014 kernel: audit: type=1105 audit(1719327514.409:348): pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.409000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.411000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.453327 kernel: audit: type=1103 audit(1719327514.411:349): pid=5581 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.729857 sshd[5578]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:34.730000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.733301 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:58:34.734668 systemd[1]: sshd@9-10.200.20.26:22-10.200.16.10:50056.service: Deactivated successfully. Jun 25 14:58:34.735556 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:58:34.737114 systemd-logind[1576]: Removed session 12. Jun 25 14:58:34.730000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.773674 kernel: audit: type=1106 audit(1719327514.730:350): pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.773790 kernel: audit: type=1104 audit(1719327514.730:351): pid=5578 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:34.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.26:22-10.200.16.10:50056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:34.806696 systemd[1]: Started sshd@10-10.200.20.26:22-10.200.16.10:55072.service - OpenSSH per-connection server daemon (10.200.16.10:55072). Jun 25 14:58:34.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.26:22-10.200.16.10:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:35.231000 audit[5591]: USER_ACCT pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:35.231835 sshd[5591]: Accepted publickey for core from 10.200.16.10 port 55072 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:35.232000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:35.232000 audit[5591]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe203d360 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:35.232000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:35.233582 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:35.239272 systemd-logind[1576]: New session 13 of user core. Jun 25 14:58:35.241533 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 14:58:35.245000 audit[5591]: USER_START pid=5591 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:35.247000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.239711 sshd[5591]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:36.240000 audit[5591]: USER_END pid=5591 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.240000 audit[5591]: CRED_DISP pid=5591 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.26:22-10.200.16.10:55072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:36.242421 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit. 
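The PROCTITLE records above carry the process title hex-encoded (proctitle=737368643A20636F7265205B707269765D), which decodes to the ASCII string "sshd: core [priv]". A short Go check of that decoding, using only the value copied from the records:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// proctitle value copied verbatim from the audit PROCTITLE records above;
	// auditd hex-encodes the raw title because it can contain spaces and NUL separators.
	const proctitle = "737368643A20636F7265205B707269765D"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", string(raw)) // "sshd: core [priv]"
}
```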
Jun 25 14:58:36.242571 systemd[1]: sshd@10-10.200.20.26:22-10.200.16.10:55072.service: Deactivated successfully. Jun 25 14:58:36.243448 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:58:36.243911 systemd-logind[1576]: Removed session 13. Jun 25 14:58:36.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.26:22-10.200.16.10:55078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:36.319631 systemd[1]: Started sshd@11-10.200.20.26:22-10.200.16.10:55078.service - OpenSSH per-connection server daemon (10.200.16.10:55078). Jun 25 14:58:36.743000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.744757 sshd[5607]: Accepted publickey for core from 10.200.16.10 port 55078 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:36.744000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.744000 audit[5607]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee4ff270 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:36.744000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:36.746509 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:36.752173 systemd-logind[1576]: New session 14 of user core. Jun 25 14:58:36.758538 systemd[1]: Started session-14.scope - Session 14 of User core. 
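Each audit record is stamped audit(seconds.milliseconds:serial), i.e. a Unix epoch timestamp plus a per-event serial number, and the epoch value agrees with the wall-clock prefix of the surrounding journal lines. A short Python check (illustrative; the value is taken from the records above and assumed to be UTC, as the journal prefixes suggest):

    import datetime
    # audit(1719327514.326:347): epoch seconds.milliseconds, then the event serial number
    ts = datetime.datetime.fromtimestamp(1719327514.326, tz=datetime.timezone.utc)
    print(ts)  # 2024-06-25 14:58:34.326000+00:00 -- matches the 'Jun 25 14:58:34.326' journal prefix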
Jun 25 14:58:36.763000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:36.764000 audit[5612]: CRED_ACQ pid=5612 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:37.132223 sshd[5607]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:37.132000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:37.132000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:37.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.26:22-10.200.16.10:55078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:37.135439 systemd[1]: sshd@11-10.200.20.26:22-10.200.16.10:55078.service: Deactivated successfully. Jun 25 14:58:37.136990 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:58:37.137865 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:58:37.139125 systemd-logind[1576]: Removed session 14. Jun 25 14:58:40.640951 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.Zp3RQ9.mount: Deactivated successfully. Jun 25 14:58:42.214687 systemd[1]: Started sshd@12-10.200.20.26:22-10.200.16.10:55094.service - OpenSSH per-connection server daemon (10.200.16.10:55094). Jun 25 14:58:42.237567 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:58:42.237687 kernel: audit: type=1130 audit(1719327522.213:371): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.26:22-10.200.16.10:55094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:42.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.26:22-10.200.16.10:55094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:42.674542 sshd[5646]: Accepted publickey for core from 10.200.16.10 port 55094 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:42.673000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.696209 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:42.694000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.715953 kernel: audit: type=1101 audit(1719327522.673:372): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.716030 kernel: audit: type=1103 audit(1719327522.694:373): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.729535 kernel: audit: type=1006 audit(1719327522.694:374): pid=5646 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 14:58:42.694000 audit[5646]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdbf4160 a2=3 a3=1 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:42.733797 systemd-logind[1576]: New session 15 of user core. Jun 25 14:58:42.755035 kernel: audit: type=1300 audit(1719327522.694:374): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdbf4160 a2=3 a3=1 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:42.755083 kernel: audit: type=1327 audit(1719327522.694:374): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:42.694000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:42.754627 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 14:58:42.759000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.783611 kernel: audit: type=1105 audit(1719327522.759:375): pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.759000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:42.803749 kernel: audit: type=1103 audit(1719327522.759:376): pid=5671 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:43.118495 sshd[5646]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:43.118000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:43.121141 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:58:43.122974 systemd[1]: sshd@12-10.200.20.26:22-10.200.16.10:55094.service: Deactivated successfully. Jun 25 14:58:43.126586 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:58:43.128132 systemd-logind[1576]: Removed session 15. Jun 25 14:58:43.118000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:43.163881 kernel: audit: type=1106 audit(1719327523.118:377): pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:43.163975 kernel: audit: type=1104 audit(1719327523.118:378): pid=5646 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:43.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.26:22-10.200.16.10:55094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:48.215403 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:58:48.215548 kernel: audit: type=1130 audit(1719327528.192:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.26:22-10.200.16.10:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:48.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.26:22-10.200.16.10:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:48.192669 systemd[1]: Started sshd@13-10.200.20.26:22-10.200.16.10:34670.service - OpenSSH per-connection server daemon (10.200.16.10:34670). Jun 25 14:58:48.617000 audit[5686]: USER_ACCT pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.617907 sshd[5686]: Accepted publickey for core from 10.200.16.10 port 34670 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:48.638000 audit[5686]: CRED_ACQ pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.639684 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:48.658376 kernel: audit: type=1101 audit(1719327528.617:381): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.658492 kernel: audit: type=1103 audit(1719327528.638:382): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.663707 systemd-logind[1576]: New session 16 of user core. Jun 25 14:58:48.695227 kernel: audit: type=1006 audit(1719327528.638:383): pid=5686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 14:58:48.695271 kernel: audit: type=1300 audit(1719327528.638:383): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc71103f0 a2=3 a3=1 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:48.695315 kernel: audit: type=1327 audit(1719327528.638:383): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:48.638000 audit[5686]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc71103f0 a2=3 a3=1 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:48.638000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:48.694714 systemd[1]: Started session-16.scope - Session 16 of User core. 
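In these records auid=4294967295 denotes an audit UID that has not been set yet: it is -1 stored as an unsigned 32-bit value, and the LOGIN records above (old-auid=4294967295 auid=500) show it being replaced by auid=500 once pam_loginuid runs for the session. A one-line Python illustration of the encoding:

    # 4294967295 is (uint32)-1, the 'unset' audit UID seen before pam_loginuid sets auid=500
    print((-1) & 0xFFFFFFFF)  # -> 4294967295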
Jun 25 14:58:48.706000 audit[5686]: USER_START pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.708000 audit[5689]: CRED_ACQ pid=5689 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.750178 kernel: audit: type=1105 audit(1719327528.706:384): pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:48.750393 kernel: audit: type=1103 audit(1719327528.708:385): pid=5689 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:49.010955 sshd[5686]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:49.012000 audit[5686]: USER_END pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:49.014617 systemd[1]: sshd@13-10.200.20.26:22-10.200.16.10:34670.service: Deactivated successfully. Jun 25 14:58:49.015496 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:58:49.012000 audit[5686]: CRED_DISP pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:49.037204 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:58:49.038164 systemd-logind[1576]: Removed session 16. Jun 25 14:58:49.055176 kernel: audit: type=1106 audit(1719327529.012:386): pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:49.055279 kernel: audit: type=1104 audit(1719327529.012:387): pid=5686 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:49.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.26:22-10.200.16.10:34670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:54.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.26:22-10.200.16.10:34678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:58:54.091830 systemd[1]: Started sshd@14-10.200.20.26:22-10.200.16.10:34678.service - OpenSSH per-connection server daemon (10.200.16.10:34678). Jun 25 14:58:54.095861 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:58:54.095930 kernel: audit: type=1130 audit(1719327534.090:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.26:22-10.200.16.10:34678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:58:54.557000 audit[5699]: USER_ACCT pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.560209 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:58:54.561181 sshd[5699]: Accepted publickey for core from 10.200.16.10 port 34678 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:58:54.558000 audit[5699]: CRED_ACQ pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.581312 kernel: audit: type=1101 audit(1719327534.557:390): pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.614578 kernel: audit: type=1103 audit(1719327534.558:391): pid=5699 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.614711 kernel: audit: type=1006 audit(1719327534.558:392): pid=5699 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 14:58:54.558000 audit[5699]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7495b80 a2=3 a3=1 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:54.618135 systemd-logind[1576]: New session 17 of user core. Jun 25 14:58:54.650500 kernel: audit: type=1300 audit(1719327534.558:392): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7495b80 a2=3 a3=1 items=0 ppid=1 pid=5699 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:58:54.650532 kernel: audit: type=1327 audit(1719327534.558:392): proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:54.558000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:58:54.649630 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 25 14:58:54.653000 audit[5699]: USER_START pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.655000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.702159 kernel: audit: type=1105 audit(1719327534.653:393): pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.702276 kernel: audit: type=1103 audit(1719327534.655:394): pid=5708 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.992000 audit[5699]: USER_END pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.996105 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:58:54.992895 sshd[5699]: pam_unix(sshd:session): session closed for user core Jun 25 14:58:54.997344 systemd[1]: sshd@14-10.200.20.26:22-10.200.16.10:34678.service: Deactivated successfully. Jun 25 14:58:54.998200 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:58:54.999436 systemd-logind[1576]: Removed session 17. Jun 25 14:58:54.992000 audit[5699]: CRED_DISP pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:55.037980 kernel: audit: type=1106 audit(1719327534.992:395): pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:55.038093 kernel: audit: type=1104 audit(1719327534.992:396): pid=5699 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:58:54.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.26:22-10.200.16.10:34678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:00.074686 systemd[1]: Started sshd@15-10.200.20.26:22-10.200.16.10:38480.service - OpenSSH per-connection server daemon (10.200.16.10:38480). 
Jun 25 14:59:00.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.26:22-10.200.16.10:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:00.079454 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:00.079529 kernel: audit: type=1130 audit(1719327540.074:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.26:22-10.200.16.10:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:00.534000 audit[5722]: USER_ACCT pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.536034 sshd[5722]: Accepted publickey for core from 10.200.16.10 port 38480 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:00.557313 kernel: audit: type=1101 audit(1719327540.534:399): pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.556000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.558236 sshd[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:00.563799 systemd-logind[1576]: New session 18 of user core. Jun 25 14:59:00.579691 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 14:59:00.589783 kernel: audit: type=1103 audit(1719327540.556:400): pid=5722 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.589876 kernel: audit: type=1006 audit(1719327540.556:401): pid=5722 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jun 25 14:59:00.556000 audit[5722]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd42bd20 a2=3 a3=1 items=0 ppid=1 pid=5722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:00.610456 kernel: audit: type=1300 audit(1719327540.556:401): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd42bd20 a2=3 a3=1 items=0 ppid=1 pid=5722 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:00.556000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:00.618850 kernel: audit: type=1327 audit(1719327540.556:401): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:00.584000 audit[5722]: USER_START pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.640979 kernel: audit: type=1105 audit(1719327540.584:402): pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.590000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.659747 kernel: audit: type=1103 audit(1719327540.590:403): pid=5725 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.941729 sshd[5722]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:00.942000 audit[5722]: USER_END pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.945225 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:59:00.946657 systemd[1]: sshd@15-10.200.20.26:22-10.200.16.10:38480.service: Deactivated successfully. Jun 25 14:59:00.947505 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:59:00.949229 systemd-logind[1576]: Removed session 18. 
Jun 25 14:59:00.942000 audit[5722]: CRED_DISP pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.985945 kernel: audit: type=1106 audit(1719327540.942:404): pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.986057 kernel: audit: type=1104 audit(1719327540.942:405): pid=5722 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:00.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.26:22-10.200.16.10:38480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:06.022796 systemd[1]: Started sshd@16-10.200.20.26:22-10.200.16.10:35530.service - OpenSSH per-connection server daemon (10.200.16.10:35530). Jun 25 14:59:06.032523 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:06.032627 kernel: audit: type=1130 audit(1719327546.021:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.26:22-10.200.16.10:35530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:06.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.26:22-10.200.16.10:35530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:06.486000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.488160 sshd[5748]: Accepted publickey for core from 10.200.16.10 port 35530 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:06.511323 kernel: audit: type=1101 audit(1719327546.486:408): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.511396 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:06.509000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.544608 kernel: audit: type=1103 audit(1719327546.509:409): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.544711 kernel: audit: type=1006 audit(1719327546.509:410): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jun 25 14:59:06.509000 audit[5748]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3be9710 a2=3 a3=1 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:06.548487 systemd-logind[1576]: New session 19 of user core. Jun 25 14:59:06.579032 kernel: audit: type=1300 audit(1719327546.509:410): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3be9710 a2=3 a3=1 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:06.579067 kernel: audit: type=1327 audit(1719327546.509:410): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:06.509000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:06.578616 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 14:59:06.583000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.585000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.626660 kernel: audit: type=1105 audit(1719327546.583:411): pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.626763 kernel: audit: type=1103 audit(1719327546.585:412): pid=5751 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.913853 sshd[5748]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:06.914000 audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.916000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.940484 systemd[1]: sshd@16-10.200.20.26:22-10.200.16.10:35530.service: Deactivated successfully. Jun 25 14:59:06.941396 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:59:06.942945 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:59:06.944147 systemd-logind[1576]: Removed session 19. Jun 25 14:59:06.958920 kernel: audit: type=1106 audit(1719327546.914:413): pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.959023 kernel: audit: type=1104 audit(1719327546.916:414): pid=5748 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:06.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.26:22-10.200.16.10:35530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:06.988639 systemd[1]: Started sshd@17-10.200.20.26:22-10.200.16.10:35536.service - OpenSSH per-connection server daemon (10.200.16.10:35536). 
Jun 25 14:59:06.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.26:22-10.200.16.10:35536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:07.412000 audit[5765]: USER_ACCT pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.413245 sshd[5765]: Accepted publickey for core from 10.200.16.10 port 35536 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:07.414000 audit[5765]: CRED_ACQ pid=5765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.414000 audit[5765]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf486f30 a2=3 a3=1 items=0 ppid=1 pid=5765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:07.414000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:07.415008 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:07.419366 systemd-logind[1576]: New session 20 of user core. Jun 25 14:59:07.422521 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 14:59:07.426000 audit[5765]: USER_START pid=5765 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.428000 audit[5768]: CRED_ACQ pid=5768 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.876513 sshd[5765]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:07.877000 audit[5765]: USER_END pid=5765 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.877000 audit[5765]: CRED_DISP pid=5765 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:07.879618 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:59:07.880332 systemd[1]: sshd@17-10.200.20.26:22-10.200.16.10:35536.service: Deactivated successfully. Jun 25 14:59:07.881260 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 14:59:07.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.26:22-10.200.16.10:35536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:07.882429 systemd-logind[1576]: Removed session 20. Jun 25 14:59:07.954658 systemd[1]: Started sshd@18-10.200.20.26:22-10.200.16.10:35548.service - OpenSSH per-connection server daemon (10.200.16.10:35548). Jun 25 14:59:07.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.26:22-10.200.16.10:35548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:08.385000 audit[5776]: USER_ACCT pid=5776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:08.387340 sshd[5776]: Accepted publickey for core from 10.200.16.10 port 35548 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:08.387000 audit[5776]: CRED_ACQ pid=5776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:08.387000 audit[5776]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa5b4e10 a2=3 a3=1 items=0 ppid=1 pid=5776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:08.387000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:08.387902 sshd[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:08.392907 systemd-logind[1576]: New session 21 of user core. Jun 25 14:59:08.398574 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 25 14:59:08.402000 audit[5776]: USER_START pid=5776 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:08.404000 audit[5779]: CRED_ACQ pid=5779 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:09.518000 audit[5793]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:09.518000 audit[5793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffc2705750 a2=0 a3=1 items=0 ppid=3220 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:09.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:09.519000 audit[5793]: NETFILTER_CFG table=nat:129 family=2 entries=22 op=nft_register_rule pid=5793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:09.519000 audit[5793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffc2705750 a2=0 a3=1 items=0 ppid=3220 pid=5793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:09.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:09.536000 audit[5796]: NETFILTER_CFG table=filter:130 family=2 entries=32 op=nft_register_rule pid=5796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:09.536000 audit[5796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd046a920 a2=0 a3=1 items=0 ppid=3220 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:09.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:09.538000 audit[5796]: NETFILTER_CFG table=nat:131 family=2 entries=22 op=nft_register_rule pid=5796 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:09.538000 audit[5796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd046a920 a2=0 a3=1 items=0 ppid=3220 pid=5796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:09.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:09.593677 sshd[5776]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:09.595000 audit[5776]: USER_END pid=5776 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:09.595000 audit[5776]: CRED_DISP pid=5776 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:09.596904 systemd[1]: sshd@18-10.200.20.26:22-10.200.16.10:35548.service: Deactivated successfully. Jun 25 14:59:09.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.26:22-10.200.16.10:35548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:09.598056 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:59:09.598435 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:59:09.599233 systemd-logind[1576]: Removed session 21. Jun 25 14:59:09.672620 systemd[1]: Started sshd@19-10.200.20.26:22-10.200.16.10:35562.service - OpenSSH per-connection server daemon (10.200.16.10:35562). Jun 25 14:59:09.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.26:22-10.200.16.10:35562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:10.131000 audit[5799]: USER_ACCT pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.132007 sshd[5799]: Accepted publickey for core from 10.200.16.10 port 35562 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:10.133658 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:10.132000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.133000 audit[5799]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe64f7a60 a2=3 a3=1 items=0 ppid=1 pid=5799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:10.133000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:10.137861 systemd-logind[1576]: New session 22 of user core. Jun 25 14:59:10.140552 systemd[1]: Started session-22.scope - Session 22 of User core. 
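The iptables-restore records a little earlier (the NETFILTER_CFG/SYSCALL pairs) also log a hex PROCTITLE, but for a multi-argument command the argv elements are separated by NUL bytes rather than spaces. A short Python sketch (illustrative; the payload is copied from those records) that recovers the command line:

    # proctitle from the iptables-restore records above; NUL bytes separate argv entries
    payload = ("69707461626C65732D726573746F7265002D770035002D5700"
               "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    print(bytes.fromhex(payload).decode().replace("\x00", " "))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters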
Jun 25 14:59:10.149000 audit[5799]: USER_START pid=5799 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.151000 audit[5802]: CRED_ACQ pid=5802 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.719870 sshd[5799]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:10.720000 audit[5799]: USER_END pid=5799 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.720000 audit[5799]: CRED_DISP pid=5799 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:10.722754 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:59:10.722899 systemd[1]: sshd@19-10.200.20.26:22-10.200.16.10:35562.service: Deactivated successfully. Jun 25 14:59:10.723787 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:59:10.724318 systemd-logind[1576]: Removed session 22. Jun 25 14:59:10.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.26:22-10.200.16.10:35562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:10.801784 systemd[1]: Started sshd@20-10.200.20.26:22-10.200.16.10:35566.service - OpenSSH per-connection server daemon (10.200.16.10:35566). Jun 25 14:59:10.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.26:22-10.200.16.10:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:11.239416 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.NpTYjq.mount: Deactivated successfully. 
Jun 25 14:59:11.265000 audit[5832]: USER_ACCT pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.265630 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 35566 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:11.269639 kernel: kauditd_printk_skb: 47 callbacks suppressed Jun 25 14:59:11.269703 kernel: audit: type=1101 audit(1719327551.265:448): pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.273713 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:11.269000 audit[5832]: CRED_ACQ pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.312504 kernel: audit: type=1103 audit(1719327551.269:449): pid=5832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.325739 kernel: audit: type=1006 audit(1719327551.269:450): pid=5832 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 14:59:11.269000 audit[5832]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5201780 a2=3 a3=1 items=0 ppid=1 pid=5832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:11.347406 kernel: audit: type=1300 audit(1719327551.269:450): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5201780 a2=3 a3=1 items=0 ppid=1 pid=5832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:11.269000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:11.351626 systemd-logind[1576]: New session 23 of user core. Jun 25 14:59:11.355776 kernel: audit: type=1327 audit(1719327551.269:450): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:11.360579 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 14:59:11.365000 audit[5832]: USER_START pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.367000 audit[5855]: CRED_ACQ pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.408169 kernel: audit: type=1105 audit(1719327551.365:451): pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.408274 kernel: audit: type=1103 audit(1719327551.367:452): pid=5855 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.698506 sshd[5832]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:11.699000 audit[5832]: USER_END pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.702329 systemd[1]: sshd@20-10.200.20.26:22-10.200.16.10:35566.service: Deactivated successfully. Jun 25 14:59:11.703244 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:59:11.700000 audit[5832]: CRED_DISP pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.741862 kernel: audit: type=1106 audit(1719327551.699:453): pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.741982 kernel: audit: type=1104 audit(1719327551.700:454): pid=5832 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:11.742261 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:59:11.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.26:22-10.200.16.10:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:11.744502 systemd-logind[1576]: Removed session 23. Jun 25 14:59:11.762517 kernel: audit: type=1131 audit(1719327551.702:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.26:22-10.200.16.10:35566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:16.781646 systemd[1]: Started sshd@21-10.200.20.26:22-10.200.16.10:49812.service - OpenSSH per-connection server daemon (10.200.16.10:49812). Jun 25 14:59:16.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.26:22-10.200.16.10:49812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:16.802323 kernel: audit: type=1130 audit(1719327556.780:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.26:22-10.200.16.10:49812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:17.244000 audit[5890]: USER_ACCT pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.246390 sshd[5890]: Accepted publickey for core from 10.200.16.10 port 49812 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:17.266000 audit[5890]: CRED_ACQ pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.268403 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:17.289568 kernel: audit: type=1101 audit(1719327557.244:457): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.289655 kernel: audit: type=1103 audit(1719327557.266:458): pid=5890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.289683 kernel: audit: type=1006 audit(1719327557.266:459): pid=5890 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 14:59:17.266000 audit[5890]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfbdf1c0 a2=3 a3=1 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:17.319813 kernel: audit: type=1300 audit(1719327557.266:459): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcfbdf1c0 a2=3 a3=1 items=0 ppid=1 pid=5890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:17.266000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:17.323902 systemd-logind[1576]: New session 24 of user core. Jun 25 14:59:17.334046 kernel: audit: type=1327 audit(1719327557.266:459): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:17.333704 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 14:59:17.337000 audit[5890]: USER_START pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.339000 audit[5893]: CRED_ACQ pid=5893 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.379790 kernel: audit: type=1105 audit(1719327557.337:460): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.379895 kernel: audit: type=1103 audit(1719327557.339:461): pid=5893 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.676764 sshd[5890]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:17.677000 audit[5890]: USER_END pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.680997 systemd[1]: sshd@21-10.200.20.26:22-10.200.16.10:49812.service: Deactivated successfully. Jun 25 14:59:17.681906 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 14:59:17.701161 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit. Jun 25 14:59:17.678000 audit[5890]: CRED_DISP pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.702430 systemd-logind[1576]: Removed session 24. Jun 25 14:59:17.719307 kernel: audit: type=1106 audit(1719327557.677:462): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.719433 kernel: audit: type=1104 audit(1719327557.678:463): pid=5890 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:17.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.26:22-10.200.16.10:49812 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:22.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.26:22-10.200.16.10:49814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:22.765567 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:22.765639 kernel: audit: type=1130 audit(1719327562.757:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.26:22-10.200.16.10:49814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:22.758865 systemd[1]: Started sshd@22-10.200.20.26:22-10.200.16.10:49814.service - OpenSSH per-connection server daemon (10.200.16.10:49814). Jun 25 14:59:23.226000 audit[5909]: USER_ACCT pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.229504 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:23.230096 sshd[5909]: Accepted publickey for core from 10.200.16.10 port 49814 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:23.227000 audit[5909]: CRED_ACQ pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.259178 systemd-logind[1576]: New session 25 of user core. Jun 25 14:59:23.313755 kernel: audit: type=1101 audit(1719327563.226:466): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.313797 kernel: audit: type=1103 audit(1719327563.227:467): pid=5909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.313818 kernel: audit: type=1006 audit(1719327563.227:468): pid=5909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 14:59:23.313836 kernel: audit: type=1300 audit(1719327563.227:468): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9047d60 a2=3 a3=1 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:23.313854 kernel: audit: type=1327 audit(1719327563.227:468): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:23.227000 audit[5909]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9047d60 a2=3 a3=1 items=0 ppid=1 pid=5909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:23.227000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:23.312813 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 14:59:23.317000 audit[5909]: USER_START pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.319000 audit[5915]: CRED_ACQ pid=5915 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.361494 kernel: audit: type=1105 audit(1719327563.317:469): pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.361630 kernel: audit: type=1103 audit(1719327563.319:470): pid=5915 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.653944 sshd[5909]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:23.654000 audit[5909]: USER_END pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.658035 systemd[1]: sshd@22-10.200.20.26:22-10.200.16.10:49814.service: Deactivated successfully. Jun 25 14:59:23.658925 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 14:59:23.655000 audit[5909]: CRED_DISP pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.681095 systemd-logind[1576]: Session 25 logged out. Waiting for processes to exit. Jun 25 14:59:23.682311 systemd-logind[1576]: Removed session 25. Jun 25 14:59:23.699743 kernel: audit: type=1106 audit(1719327563.654:471): pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.699939 kernel: audit: type=1104 audit(1719327563.655:472): pid=5909 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:23.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.26:22-10.200.16.10:49814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:25.981000 audit[5928]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:25.981000 audit[5928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe3137010 a2=0 a3=1 items=0 ppid=3220 pid=5928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:25.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:25.983000 audit[5928]: NETFILTER_CFG table=nat:133 family=2 entries=106 op=nft_register_chain pid=5928 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:59:25.983000 audit[5928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffe3137010 a2=0 a3=1 items=0 ppid=3220 pid=5928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:25.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:59:28.732656 systemd[1]: Started sshd@23-10.200.20.26:22-10.200.16.10:55204.service - OpenSSH per-connection server daemon (10.200.16.10:55204). Jun 25 14:59:28.756682 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:59:28.756807 kernel: audit: type=1130 audit(1719327568.731:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.26:22-10.200.16.10:55204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:28.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.26:22-10.200.16.10:55204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:29.191000 audit[5935]: USER_ACCT pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.193365 sshd[5935]: Accepted publickey for core from 10.200.16.10 port 55204 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:29.214318 kernel: audit: type=1101 audit(1719327569.191:477): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.214000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.216073 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:29.247125 kernel: audit: type=1103 audit(1719327569.214:478): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.247229 kernel: audit: type=1006 audit(1719327569.214:479): pid=5935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 14:59:29.214000 audit[5935]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffecd5af70 a2=3 a3=1 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:29.267962 kernel: audit: type=1300 audit(1719327569.214:479): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffecd5af70 a2=3 a3=1 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:29.214000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:29.274028 systemd-logind[1576]: New session 26 of user core. Jun 25 14:59:29.281736 kernel: audit: type=1327 audit(1719327569.214:479): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:29.281610 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jun 25 14:59:29.287000 audit[5935]: USER_START pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.288000 audit[5938]: CRED_ACQ pid=5938 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.330483 kernel: audit: type=1105 audit(1719327569.287:480): pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.330605 kernel: audit: type=1103 audit(1719327569.288:481): pid=5938 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.605864 sshd[5935]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:29.605000 audit[5935]: USER_END pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.609706 systemd-logind[1576]: Session 26 logged out. Waiting for processes to exit. Jun 25 14:59:29.611202 systemd[1]: sshd@23-10.200.20.26:22-10.200.16.10:55204.service: Deactivated successfully. Jun 25 14:59:29.612123 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 14:59:29.613629 systemd-logind[1576]: Removed session 26. Jun 25 14:59:29.605000 audit[5935]: CRED_DISP pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.648103 kernel: audit: type=1106 audit(1719327569.605:482): pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.648225 kernel: audit: type=1104 audit(1719327569.605:483): pid=5935 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:29.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.26:22-10.200.16.10:55204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:34.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.26:22-10.200.16.10:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:34.679667 systemd[1]: Started sshd@24-10.200.20.26:22-10.200.16.10:44556.service - OpenSSH per-connection server daemon (10.200.16.10:44556). Jun 25 14:59:34.683757 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:34.683794 kernel: audit: type=1130 audit(1719327574.679:485): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.26:22-10.200.16.10:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:35.106000 audit[5948]: USER_ACCT pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.107505 sshd[5948]: Accepted publickey for core from 10.200.16.10 port 44556 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:35.109568 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:35.115973 systemd-logind[1576]: New session 27 of user core. Jun 25 14:59:35.163681 kernel: audit: type=1101 audit(1719327575.106:486): pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.163740 kernel: audit: type=1103 audit(1719327575.108:487): pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.163792 kernel: audit: type=1006 audit(1719327575.108:488): pid=5948 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 14:59:35.163815 kernel: audit: type=1300 audit(1719327575.108:488): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec087770 a2=3 a3=1 items=0 ppid=1 pid=5948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:35.108000 audit[5948]: CRED_ACQ pid=5948 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.108000 audit[5948]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffec087770 a2=3 a3=1 items=0 ppid=1 pid=5948 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:35.161185 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 14:59:35.108000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:35.190276 kernel: audit: type=1327 audit(1719327575.108:488): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:35.184000 audit[5948]: USER_START pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.214401 kernel: audit: type=1105 audit(1719327575.184:489): pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.187000 audit[5951]: CRED_ACQ pid=5951 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.234134 kernel: audit: type=1103 audit(1719327575.187:490): pid=5951 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.488511 sshd[5948]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:35.489000 audit[5948]: USER_END pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.492372 systemd-logind[1576]: Session 27 logged out. Waiting for processes to exit. Jun 25 14:59:35.493790 systemd[1]: sshd@24-10.200.20.26:22-10.200.16.10:44556.service: Deactivated successfully. Jun 25 14:59:35.494704 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 14:59:35.496297 systemd-logind[1576]: Removed session 27. Jun 25 14:59:35.490000 audit[5948]: CRED_DISP pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.531327 kernel: audit: type=1106 audit(1719327575.489:491): pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.531463 kernel: audit: type=1104 audit(1719327575.490:492): pid=5948 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:35.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.26:22-10.200.16.10:44556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:40.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.26:22-10.200.16.10:44566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:40.570668 systemd[1]: Started sshd@25-10.200.20.26:22-10.200.16.10:44566.service - OpenSSH per-connection server daemon (10.200.16.10:44566). Jun 25 14:59:40.574819 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:40.574894 kernel: audit: type=1130 audit(1719327580.570:494): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.26:22-10.200.16.10:44566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:41.031000 audit[5970]: USER_ACCT pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.032253 sshd[5970]: Accepted publickey for core from 10.200.16.10 port 44566 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:41.054000 audit[5970]: CRED_ACQ pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.055393 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:41.060718 systemd-logind[1576]: New session 28 of user core. Jun 25 14:59:41.087135 kernel: audit: type=1101 audit(1719327581.031:495): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.087183 kernel: audit: type=1103 audit(1719327581.054:496): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.087220 kernel: audit: type=1006 audit(1719327581.054:497): pid=5970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 14:59:41.087248 kernel: audit: type=1300 audit(1719327581.054:497): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedb7d740 a2=3 a3=1 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:41.054000 audit[5970]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffedb7d740 a2=3 a3=1 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:41.086711 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 14:59:41.054000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:41.115563 kernel: audit: type=1327 audit(1719327581.054:497): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:41.092000 audit[5970]: USER_START pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.138335 kernel: audit: type=1105 audit(1719327581.092:498): pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.093000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.157342 kernel: audit: type=1103 audit(1719327581.093:499): pid=5997 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.462667 sshd[5970]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:41.463000 audit[5970]: USER_END pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.466372 systemd-logind[1576]: Session 28 logged out. Waiting for processes to exit. Jun 25 14:59:41.467882 systemd[1]: sshd@25-10.200.20.26:22-10.200.16.10:44566.service: Deactivated successfully. Jun 25 14:59:41.468805 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 14:59:41.470507 systemd-logind[1576]: Removed session 28. Jun 25 14:59:41.463000 audit[5970]: CRED_DISP pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.505463 kernel: audit: type=1106 audit(1719327581.463:500): pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.505577 kernel: audit: type=1104 audit(1719327581.463:501): pid=5970 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:41.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.26:22-10.200.16.10:44566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:46.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.26:22-10.200.16.10:46408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:46.538718 systemd[1]: Started sshd@26-10.200.20.26:22-10.200.16.10:46408.service - OpenSSH per-connection server daemon (10.200.16.10:46408). Jun 25 14:59:46.542970 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:46.543068 kernel: audit: type=1130 audit(1719327586.537:503): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.26:22-10.200.16.10:46408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:46.965000 audit[6031]: USER_ACCT pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:46.966703 sshd[6031]: Accepted publickey for core from 10.200.16.10 port 46408 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:46.987000 audit[6031]: CRED_ACQ pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:46.989327 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:47.007692 kernel: audit: type=1101 audit(1719327586.965:504): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.007796 kernel: audit: type=1103 audit(1719327586.987:505): pid=6031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.012652 systemd-logind[1576]: New session 29 of user core. Jun 25 14:59:47.044127 kernel: audit: type=1006 audit(1719327586.987:506): pid=6031 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 14:59:47.044167 kernel: audit: type=1300 audit(1719327586.987:506): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdf586c0 a2=3 a3=1 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:46.987000 audit[6031]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdf586c0 a2=3 a3=1 items=0 ppid=1 pid=6031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:47.043630 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jun 25 14:59:46.987000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:47.052314 kernel: audit: type=1327 audit(1719327586.987:506): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:47.052418 kernel: audit: type=1105 audit(1719327587.049:507): pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.049000 audit[6031]: USER_START pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.051000 audit[6034]: CRED_ACQ pid=6034 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.094253 kernel: audit: type=1103 audit(1719327587.051:508): pid=6034 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.362027 sshd[6031]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:47.362000 audit[6031]: USER_END pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.366092 systemd[1]: sshd@26-10.200.20.26:22-10.200.16.10:46408.service: Deactivated successfully. Jun 25 14:59:47.366985 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 14:59:47.386596 systemd-logind[1576]: Session 29 logged out. Waiting for processes to exit. Jun 25 14:59:47.362000 audit[6031]: CRED_DISP pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.387786 systemd-logind[1576]: Removed session 29. Jun 25 14:59:47.405172 kernel: audit: type=1106 audit(1719327587.362:509): pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.405263 kernel: audit: type=1104 audit(1719327587.362:510): pid=6031 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:47.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.26:22-10.200.16.10:46408 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:52.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.26:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:52.438705 systemd[1]: Started sshd@27-10.200.20.26:22-10.200.16.10:46414.service - OpenSSH per-connection server daemon (10.200.16.10:46414). Jun 25 14:59:52.442973 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:52.443044 kernel: audit: type=1130 audit(1719327592.437:512): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.26:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:52.864000 audit[6049]: USER_ACCT pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.865683 sshd[6049]: Accepted publickey for core from 10.200.16.10 port 46414 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:52.888358 kernel: audit: type=1101 audit(1719327592.864:513): pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.887000 audit[6049]: CRED_ACQ pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.889597 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:52.895566 systemd-logind[1576]: New session 30 of user core. Jun 25 14:59:52.946214 kernel: audit: type=1103 audit(1719327592.887:514): pid=6049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.946246 kernel: audit: type=1006 audit(1719327592.887:515): pid=6049 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jun 25 14:59:52.946265 kernel: audit: type=1300 audit(1719327592.887:515): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6ca76b0 a2=3 a3=1 items=0 ppid=1 pid=6049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:52.887000 audit[6049]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6ca76b0 a2=3 a3=1 items=0 ppid=1 pid=6049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:52.945767 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jun 25 14:59:52.887000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:52.954484 kernel: audit: type=1327 audit(1719327592.887:515): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:52.951000 audit[6049]: USER_START pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.978004 kernel: audit: type=1105 audit(1719327592.951:516): pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.953000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:52.997437 kernel: audit: type=1103 audit(1719327592.953:517): pid=6052 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:53.275496 sshd[6049]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:53.275000 audit[6049]: USER_END pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:53.278810 systemd[1]: sshd@27-10.200.20.26:22-10.200.16.10:46414.service: Deactivated successfully. Jun 25 14:59:53.279764 systemd[1]: session-30.scope: Deactivated successfully. Jun 25 14:59:53.300117 systemd-logind[1576]: Session 30 logged out. Waiting for processes to exit. Jun 25 14:59:53.275000 audit[6049]: CRED_DISP pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:53.302013 systemd-logind[1576]: Removed session 30. Jun 25 14:59:53.318561 kernel: audit: type=1106 audit(1719327593.275:518): pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:53.318670 kernel: audit: type=1104 audit(1719327593.275:519): pid=6049 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:53.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.26:22-10.200.16.10:46414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:59:58.348938 systemd[1]: Started sshd@28-10.200.20.26:22-10.200.16.10:38600.service - OpenSSH per-connection server daemon (10.200.16.10:38600). Jun 25 14:59:58.372071 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:59:58.372190 kernel: audit: type=1130 audit(1719327598.348:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.26:22-10.200.16.10:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:58.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.26:22-10.200.16.10:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:59:58.772000 audit[6061]: USER_ACCT pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.774024 sshd[6061]: Accepted publickey for core from 10.200.16.10 port 38600 ssh2: RSA SHA256:Qh+VlRb4ihBH/5ObdfYp2Cpy54J0tcWQRtPZz5VfSgo Jun 25 14:59:58.794000 audit[6061]: CRED_ACQ pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.796496 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:59:58.814712 kernel: audit: type=1101 audit(1719327598.772:522): pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.814847 kernel: audit: type=1103 audit(1719327598.794:523): pid=6061 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.827603 kernel: audit: type=1006 audit(1719327598.794:524): pid=6061 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jun 25 14:59:58.794000 audit[6061]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdd70620 a2=3 a3=1 items=0 ppid=1 pid=6061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:58.848751 kernel: audit: type=1300 audit(1719327598.794:524): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdd70620 a2=3 a3=1 items=0 ppid=1 pid=6061 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:59:58.794000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:58.856957 kernel: audit: type=1327 audit(1719327598.794:524): proctitle=737368643A20636F7265205B707269765D Jun 25 14:59:58.860559 systemd-logind[1576]: New session 31 of user core. Jun 25 14:59:58.863560 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jun 25 14:59:58.867000 audit[6061]: USER_START pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.868000 audit[6064]: CRED_ACQ pid=6064 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.909778 kernel: audit: type=1105 audit(1719327598.867:525): pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:58.909872 kernel: audit: type=1103 audit(1719327598.868:526): pid=6064 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:59.188963 sshd[6061]: pam_unix(sshd:session): session closed for user core Jun 25 14:59:59.188000 audit[6061]: USER_END pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:59.192671 systemd[1]: sshd@28-10.200.20.26:22-10.200.16.10:38600.service: Deactivated successfully. Jun 25 14:59:59.193584 systemd[1]: session-31.scope: Deactivated successfully. Jun 25 14:59:59.213269 systemd-logind[1576]: Session 31 logged out. Waiting for processes to exit. Jun 25 14:59:59.189000 audit[6061]: CRED_DISP pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:59.214720 systemd-logind[1576]: Removed session 31. Jun 25 14:59:59.231551 kernel: audit: type=1106 audit(1719327599.188:527): pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:59.231662 kernel: audit: type=1104 audit(1719327599.189:528): pid=6061 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jun 25 14:59:59.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.20.26:22-10.200.16.10:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 15:00:11.239765 systemd[1]: run-containerd-runc-k8s.io-71f1a1cd5a2532ed6e1b65cc184b766346c99954ff978bb8b266f42614b5662c-runc.0ajQaV.mount: Deactivated successfully. 
Jun 25 15:00:12.604673 systemd[1]: run-containerd-runc-k8s.io-d96b939a35578cf62c3191ef90e4e242f353a51fef95656751ca1ba002df1203-runc.XrB7PJ.mount: Deactivated successfully. Jun 25 15:00:13.273207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6-rootfs.mount: Deactivated successfully. Jun 25 15:00:13.275562 containerd[1604]: time="2024-06-25T15:00:13.275503470Z" level=info msg="shim disconnected" id=10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6 namespace=k8s.io Jun 25 15:00:13.275562 containerd[1604]: time="2024-06-25T15:00:13.275561109Z" level=warning msg="cleaning up after shim disconnected" id=10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6 namespace=k8s.io Jun 25 15:00:13.275929 containerd[1604]: time="2024-06-25T15:00:13.275571068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 15:00:14.088388 kubelet[3039]: I0625 15:00:14.087863 3039 scope.go:117] "RemoveContainer" containerID="10ee202b534874398dde640e7d6bf3e5a421222a64bba7174fd88575b2fef3e6" Jun 25 15:00:14.091108 containerd[1604]: time="2024-06-25T15:00:14.091063484Z" level=info msg="CreateContainer within sandbox \"1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 15:00:14.120985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236235075.mount: Deactivated successfully. Jun 25 15:00:14.130138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269498506.mount: Deactivated successfully. Jun 25 15:00:14.139840 containerd[1604]: time="2024-06-25T15:00:14.139793103Z" level=info msg="CreateContainer within sandbox \"1096f15f23a7594fed4f68e531433007b21be568c1bf61158211775c4b5adc78\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a6064dfb40157ba1c003e3a7ce6eb2f7e4a10603252fe64353854786c8cbf711\"" Jun 25 15:00:14.140395 containerd[1604]: time="2024-06-25T15:00:14.140370811Z" level=info msg="StartContainer for \"a6064dfb40157ba1c003e3a7ce6eb2f7e4a10603252fe64353854786c8cbf711\"" Jun 25 15:00:14.192045 containerd[1604]: time="2024-06-25T15:00:14.191565299Z" level=info msg="StartContainer for \"a6064dfb40157ba1c003e3a7ce6eb2f7e4a10603252fe64353854786c8cbf711\" returns successfully" Jun 25 15:00:14.308676 containerd[1604]: time="2024-06-25T15:00:14.308110219Z" level=info msg="shim disconnected" id=13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42 namespace=k8s.io Jun 25 15:00:14.308676 containerd[1604]: time="2024-06-25T15:00:14.308181738Z" level=warning msg="cleaning up after shim disconnected" id=13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42 namespace=k8s.io Jun 25 15:00:14.308676 containerd[1604]: time="2024-06-25T15:00:14.308190738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 15:00:15.092472 kubelet[3039]: I0625 15:00:15.092443 3039 scope.go:117] "RemoveContainer" containerID="13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42" Jun 25 15:00:15.094792 containerd[1604]: time="2024-06-25T15:00:15.094751528Z" level=info msg="CreateContainer within sandbox \"b48b977844deee89cd16a6856e40853d39399df977acb6f6d9cbdc862620e171\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 15:00:15.116740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13e7f1820ad12d65101e2d75690f4932b194ed721d74e068d5e82ccdc2a6fa42-rootfs.mount: Deactivated successfully. 
Jun 25 15:00:15.244731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617285704.mount: Deactivated successfully.
Jun 25 15:00:15.251401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352001950.mount: Deactivated successfully.
Jun 25 15:00:15.385066 containerd[1604]: time="2024-06-25T15:00:15.384745396Z" level=info msg="CreateContainer within sandbox \"b48b977844deee89cd16a6856e40853d39399df977acb6f6d9cbdc862620e171\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f0b57a56adc81d9775d8706627fb59792a9dc25760f4e92e14a75137f39479ec\""
Jun 25 15:00:15.385546 containerd[1604]: time="2024-06-25T15:00:15.385505060Z" level=info msg="StartContainer for \"f0b57a56adc81d9775d8706627fb59792a9dc25760f4e92e14a75137f39479ec\""
Jun 25 15:00:15.442594 containerd[1604]: time="2024-06-25T15:00:15.442432560Z" level=info msg="StartContainer for \"f0b57a56adc81d9775d8706627fb59792a9dc25760f4e92e14a75137f39479ec\" returns successfully"
Jun 25 15:00:17.716303 kubelet[3039]: E0625 15:00:17.716167 3039 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3815.2.4-a-2c7c8223bb.17dc47597531cfcc", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3815.2.4-a-2c7c8223bb", UID:"03ab144c7a2b275235aadba34ec9763f", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3815.2.4-a-2c7c8223bb"}, FirstTimestamp:time.Date(2024, time.June, 25, 15, 0, 7, 292547020, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 15, 0, 7, 292547020, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3815.2.4-a-2c7c8223bb"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.26:44364->10.200.20.41:2379: read: connection timed out' (will not retry!)
Jun 25 15:00:19.331002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292-rootfs.mount: Deactivated successfully.
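The rejected event above (and the lease-update failures that follow) fail because the API server's reads from etcd at 10.200.20.41:2379 are timing out. As a hedged illustration (not from the log), a minimal Go sketch that probes that member with the etcd clientv3 Status call; the plain, non-TLS dial is an assumption for brevity, since a production etcd such as this one normally requires client certificates.

// Illustrative sketch: check the status of the etcd member whose address
// appears in the read timeouts above. TLS handling is omitted by assumption.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	endpoint := "10.200.20.41:2379" // taken from the timeout messages in the log

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{endpoint},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Status reports the member's leader, database size, and any errors.
	resp, err := cli.Status(ctx, endpoint)
	if err != nil {
		log.Fatalf("etcd status check failed: %v", err)
	}
	fmt.Printf("etcd %s: leader=%x dbSize=%d\n", endpoint, resp.Leader, resp.DbSize)
}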
Jun 25 15:00:19.333109 containerd[1604]: time="2024-06-25T15:00:19.333043230Z" level=info msg="shim disconnected" id=4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292 namespace=k8s.io
Jun 25 15:00:19.333541 containerd[1604]: time="2024-06-25T15:00:19.333514620Z" level=warning msg="cleaning up after shim disconnected" id=4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292 namespace=k8s.io
Jun 25 15:00:19.333655 containerd[1604]: time="2024-06-25T15:00:19.333639698Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 25 15:00:19.404432 kubelet[3039]: E0625 15:00:19.404396 3039 controller.go:193] "Failed to update lease" err="Put \"https://10.200.20.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.4-a-2c7c8223bb?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jun 25 15:00:19.510274 kubelet[3039]: E0625 15:00:19.510244 3039 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.26:42428->10.200.20.41:2379: read: connection timed out"
Jun 25 15:00:20.106551 kubelet[3039]: I0625 15:00:20.106521 3039 scope.go:117] "RemoveContainer" containerID="4f57b3d40949d8c1eef6865aca3b29025d8dbce6684671ff66266ecd2ac17292"
Jun 25 15:00:20.109212 containerd[1604]: time="2024-06-25T15:00:20.109134769Z" level=info msg="CreateContainer within sandbox \"422822c6866a635bb2a24a2d0dbd727788cdf00f6b5e4a3a2760cbb802457951\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jun 25 15:00:20.135932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282974351.mount: Deactivated successfully.
Jun 25 15:00:20.152611 containerd[1604]: time="2024-06-25T15:00:20.152550751Z" level=info msg="CreateContainer within sandbox \"422822c6866a635bb2a24a2d0dbd727788cdf00f6b5e4a3a2760cbb802457951\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"218ba27bac1266e360eeed37333ddddbce943358470c4763ed83279e7a939320\""
Jun 25 15:00:20.154214 containerd[1604]: time="2024-06-25T15:00:20.154163679Z" level=info msg="StartContainer for \"218ba27bac1266e360eeed37333ddddbce943358470c4763ed83279e7a939320\""
Jun 25 15:00:20.219732 containerd[1604]: time="2024-06-25T15:00:20.219556027Z" level=info msg="StartContainer for \"218ba27bac1266e360eeed37333ddddbce943358470c4763ed83279e7a939320\" returns successfully"
Jun 25 15:00:24.442579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.460546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.477215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.487990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.488322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.512634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.512962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.538300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.538670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.564237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.564569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.564757 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.581930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.590720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.599545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.608633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.618671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.628367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.638296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.648315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.657358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.666736 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.676404 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.685311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x28 status: scsi 0x2 srb 0x4 hv 0xc0000001
Jun 25 15:00:24.695046 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
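The repeated hv_storvsc messages report SCSI writes (cmd 0x2a, WRITE(10)) and reads (cmd 0x28, READ(10)) completing with SCSI status 0x2 (CHECK CONDITION) and a failed SRB status on the Hyper-V virtual disk, which is consistent with the etcd read timeouts recorded just before them. For illustration only, the following Go sketch tallies such entries from a saved journal or dmesg excerpt; the input filename and the regular expression are assumptions, not part of the log.

// Illustrative sketch: summarize hv_storvsc error lines from a saved log file.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	// Hypothetical input file containing journal/dmesg output like the lines above.
	f, err := os.Open("storvsc.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Matches e.g. "hv_storvsc ...: tag#11 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001".
	re := regexp.MustCompile(`hv_storvsc .*?: tag#\d+ cmd (0x[0-9a-f]+) status: scsi (0x[0-9a-f]+) srb (0x[0-9a-f]+)`)

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			// Key by SCSI opcode and the reported scsi/srb statuses.
			counts[fmt.Sprintf("cmd=%s scsi=%s srb=%s", m[1], m[2], m[3])]++
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for k, v := range counts {
		fmt.Printf("%-40s %d\n", k, v)
	}
}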